https://pve.proxmox.com/mediawiki/api.php?action=feedcontributions&user=M.almalat&feedformat=atomProxmox VE - User contributions [en]2024-03-28T19:46:10ZUser contributionsMediaWiki 1.35.11https://pve.proxmox.com/mediawiki/index.php?title=OVH&diff=11322OVH2022-04-04T09:01:47Z<p>M.almalat: </p>
<hr />
<div><br />
= Introduction =<br />
<br />
This article describes the OVH specific network settings for Proxmox VE 4.x.<br />
<br />
OVH is one of the main [https://www.proxmox.com/en/partners/hosting Proxmox VE hosting partners] and provides ready-to-use Proxmox VE images which allows you to start immediately with the installation and configuration of your virtual machines and containers.<br />
<br />
= Network configuration =<br />
<br />
In the OVH network the gateway is usually outside the IP subnet.<br />
Therefore routes to the gateways have to be added in order for them to be<br />
accessible from within containers or VMs.<br />
<br />
== Proxmox VE host ==<br />
<br />
If you use the OVH Proxmox VE template the network is preconfigured in a<br />
working condition. <br />
<br />
The IPv6 network is configured in a similar fashion to what is described in this article's ''LXC container''<br />
section, and the IPv4 network is configured the way you'd expect, with a /24<br />
subnet. Note that you should ''not'' copy this kind of configuration into a<br />
container. OVH suggests using a single address with a netmask of<br />
255.255.255.255.<br />
<br />
In order to enable IPv6 networking ''if the firewall is enabled'' you need to<br />
make sure that the [https://en.wikipedia.org/wiki/Neighbor_Discovery_Protocol neighbor discovery protocol]<br />
is allowed. For this there's an option in the firewall's options tab.<br />
Additionally there's a firewall macro called ''NeighborDiscovery'' available<br />
in case you wish to move the NDP rules to 'below' some other custom rules.<br />
<br />
See the [[Firewall#_notes_on_ipv6|Firewall notes about IPv6]] for more<br />
information.<br />
<br />
== LXC container ==<br />
<br />
=== IPv4 ===<br />
[[Image:Screen-LXC-OVH-IPv4.png|thumb]]<br />
If you have a block of IPv4 addresses, you can add an IPv4 address just via the GUI. But first you need to use the OVH manager to create a virtual OVH MAC address for the IPv4 address you want to assign. As soon as the OVH virtual MAC is created, you can add this virtual OVH MAC and the IPv4 address via the Proxmox VE GUI.<br />
<br />
The GUI will add the required route to make the gateway reachable. <br />
<br />
Let's for instance assume you have the IPv4 block 1.2.3.8/30. Then your addresses range from 1.2.3.8 to 1.2.3.11. In the OVH<br />
network your gateway always ends with an octet of 254, so if your Proxmox VE host's main IPv4 address is 4.3.2.1, the gateway is 4.3.2.254.<br />
<br />
Assume we want to assign the address 1.2.3.9 to a container; the resulting configuration is shown in the screenshot.<br />
<br />
Inside a Debian container, this automatically creates the following ''/etc/network/interfaces'' file:<br />
<br />
auto lo<br />
iface lo inet loopback<br />
<br />
auto eth0<br />
iface eth0 inet static<br />
address 1.2.3.9<br />
netmask 255.255.255.255<br />
post-up ip route add 4.3.2.254 dev eth0<br />
post-up ip route add default via 4.3.2.254<br />
pre-down ip route del default via 4.3.2.254<br />
pre-down ip route del 4.3.2.254 dev eth0<br />
<br />
Some background info: Debian would refuse to use a gateway that is not part of the configured network, and with the above configuration the network is assumed to be 1.2.3.9/32, consisting of only this one address. Therefore we need to use ''post-up'' commands to first add a route to the gateway via the same interface we're configuring (eth0), and then add the default route via that gateway.<br />
<br />
=== IPv6 ===<br />
<br />
With IPv6 the situation is similar, but you cannot assign virtual MAC addresses to IPv6 addresses via the OVH GUI. The easiest setup is to also assign an IPv4 address with a valid MAC to the container; the configuration then works the same as with IPv4. You can use the regular prefix length of 64 (in other words, you do ''not'' need to use the IPv6 equivalent of a 255.255.255.255 netmask).<br />
Other options include a routed setup with the host as gateway (this is also possible with IPv4 when configured correctly), or a routed setup with the normal gateway using proxy-NDP. For the latter, you configure the network like you normally would if you had a valid vMAC, but enable proxy_ndp on the two bridges, add the IPv6 gateway to the neighbor proxy table of the routing bridge (vmbr1 in the default OVH Proxmox template), and add the container's IPv6 address to the neighbor proxy table on the outer bridge (vmbr0 in the default setup).<br />
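<br />
A minimal sketch of the proxy-NDP variant, reusing this article's example prefix (the container address ending in ::2 and the bridge names are examples; adjust them to your setup). These commands are not persistent across reboots and would normally be placed in ''post-up'' hooks in ''/etc/network/interfaces'':<br />
<br />
 # allow the kernel to answer neighbor solicitations on behalf of other hosts<br />
 sysctl -w net.ipv6.conf.vmbr0.proxy_ndp=1<br />
 sysctl -w net.ipv6.conf.vmbr1.proxy_ndp=1<br />
 # make the OVH gateway reachable from the routing bridge (vmbr1)<br />
 ip -6 neigh add proxy 2001:1234:1234:01ff:ff:ff:ff:ff dev vmbr1<br />
 # announce the container's IPv6 address on the outer bridge (vmbr0)<br />
 ip -6 neigh add proxy 2001:1234:1234:0123::2 dev vmbr0<br />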
<br />
Assuming the regular setup with a vMAC available, we get a similar result as with IPv4; for instance on Debian we get the following entry in ''/etc/network/interfaces'':<br />
<br />
iface vmbr0 inet6 static<br />
address 2001:1234:1234:0123::1<br />
netmask 64<br />
post-up /sbin/ip -f inet6 route add 2001:1234:1234:01ff:ff:ff:ff:ff dev vmbr0<br />
post-up /sbin/ip -f inet6 route add default via 2001:1234:1234:01ff:ff:ff:ff:ff<br />
pre-down /sbin/ip -f inet6 route del default via 2001:1234:1234:01ff:ff:ff:ff:ff<br />
pre-down /sbin/ip -f inet6 route del 2001:1234:1234:01ff:ff:ff:ff:ff dev vmbr0<br />
<br />
The IPv6 gateway address on OVH consists of the first three groups (48 bits) of your server's IPv6 address, followed by "XXff:ff:ff:ff:ff", with XX being the upper byte of the 4th group; in other words, the first 56 bits are kept and the remainder is filled with ff. (Note that if the 4th group has fewer than 4 digits, for instance ''1ab'', you need to treat it as if it were padded with zeroes to 4 digits, in this case ''01ab'', so the XX would be ''01''.)<br />
<br />
== Virtual machines (QEMU) ==<br />
<br />
=== IPv4 ===<br />
If you have a block of IPv4 addresses you can use them for your virtual machines.<br />
<br />
First you need to use the OVH manager to create a virtual mac address for the address you want to assign. Then you can create the virtual machine in the Proxmox VE GUI and assign the generated mac address to your virtual network card.<br />
<br />
The network configuration files inside your virtual machine (i.e. ''/etc/network/interfaces'' on Debian/Ubuntu) are the same as in the above container examples.<br />
<br />
=== IPv6 ===<br />
The same as for containers.<br />
<br />
=Links=<br />
[http://help.ovh.com/Proxmox Proxmox on help.ovh.com]<br />
<br />
[[Category:HOWTO]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=Fail2ban&diff=11320Fail2ban2022-03-31T11:27:37Z<p>M.almalat: /* Protecting the web interface with fail2ban */</p>
<hr />
<div>== Protecting the web interface with fail2ban ==<br />
<br />
<pre>apt install fail2ban</pre><br />
<br />
Create the file jail.local if it does not exist.<br />
(Settings in this file take precedence over identical settings in jail.conf. Use jail.conf as a template, copying and enabling in jail.local only what you are interested in.<br />
Note that jail.conf may be overwritten by fail2ban package updates, while jail.local will not.)<br />
<br />
The following configuration works, unchanged, for Proxmox VE > 3.x (last tested with 7.1).<br />
<br />
Add the following section to the end of the file /etc/fail2ban/jail.local:<br />
<br />
<pre><br />
[proxmox]<br />
enabled = true<br />
port = https,http,8006<br />
filter = proxmox<br />
logpath = /var/log/daemon.log<br />
maxretry = 3<br />
# 1 hour<br />
bantime = 3600<br />
</pre><br />
<br />
Create the file /etc/fail2ban/filter.d/proxmox.conf :<br />
<br />
<pre><br />
[Definition]<br />
failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.* msg=.*<br />
ignoreregex =<br />
</pre><br />
<br />
You can test your configuration by attempting a GUI login with a wrong password or user, and then issuing the command:<br />
<br />
<pre><br />
fail2ban-regex /var/log/daemon.log /etc/fail2ban/filter.d/proxmox.conf<br />
</pre><br />
<br />
You should see ''at least'' a "Failregex: 1 total" at the top of the "Results" section (and "1 matched" at the bottom).<br />
<br />
Restart fail2ban:<br />
<br />
<pre><br />
systemctl restart fail2ban<br />
</pre><br />
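<br />
After a failed login attempt has been caught, you can check the jail status and the currently banned IP addresses (a quick sketch; the jail name matches the section defined above):<br />
<br />
<pre><br />
fail2ban-client status proxmox<br />
</pre><br />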
<br />
== Links ==<br />
* [http://www.fail2ban.org/wiki/index.php/Main_Page Fail2Ban ]<br />
* [http://extremeshok.com/blog/?page_id=810 Proxmox VE 2.1 - Harden Network]<br />
* [http://forum.proxmox.com/threads/3583-How-To-implement-Fail2Ban-on-Host Forum Post]<br />
* [http://forum.proxmox.com/threads/16156-Fail2ban-for-Proxmox-3-1 Forum Post for 3.1]<br />
<br />
[[Category: HOWTO]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_6.x_to_7.0&diff=11177Upgrade from 6.x to 7.02021-08-16T07:49:14Z<p>M.almalat: </p>
<hr />
<div>= Introduction =<br />
<br />
Proxmox VE 7.x introduces several new major features. You should plan the upgrade carefully, make and verify backups before beginning, and test extensively. Depending on the existing configuration, several manual steps—including some downtime—may be required.<br />
<br />
'''Note:''' A valid and tested backup is ''always'' required before starting the upgrade process. Test the backup beforehand in a test lab setup.<br />
<br />
In case the system is customized and/or uses additional packages or any other third party repositories/packages, ensure those packages are also upgraded to and compatible with Debian Bullseye.<br />
<br />
In general, there are two ways to upgrade a Proxmox VE 6.x system to Proxmox VE 7.x:<br />
<br />
*A new installation on new hardware (restoring VMs from the backup)<br />
*An in-place upgrade via apt (step-by-step)<br />
<br />
In both cases, emptying the browser cache and reloading the GUI are required after the upgrade.<br />
<br />
= New installation =<br />
<br />
* Backup all VMs and containers to an external storage (see [[Backup and Restore]]).<br />
* Backup all files in /etc ('''required:''' files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf, and anything that deviates from a default installation).<br />
* Install Proxmox VE 7.x from the ISO (this will delete all data on the existing host).<br />
* Rebuild your cluster, if applicable.<br />
* Restore the file /etc/pve/storage.cfg (this will make the external storage used for backup available).<br />
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if applicable).<br />
* Restore all VMs from backups (see [[Backup and Restore]]).<br />
<br />
Administrators comfortable with the command line can follow the procedure [[Bypassing backup and restore when upgrading]], if all VMs/CTs are on a single shared storage.<br />
<br />
= In-place upgrade =<br />
<br />
In-place upgrades are carried out via apt. Familiarity with apt is required to proceed with this upgrade method.<br />
<br />
== Preconditions ==<br />
<br />
* Upgraded to the latest version of Proxmox VE 6.4<br />
* Hyper-converged Ceph: upgrade the Ceph cluster to Ceph 15.2 Octopus '''before''' you start the Proxmox VE upgrade to 7.0. Follow the guide [[Ceph Nautilus to Octopus]]<br />
* Co-installed Proxmox Backup Server: see [https://pbs.proxmox.com/wiki/index.php/Upgrade_from_1.1_to_2.x the Proxmox Backup Server 1.1 to 2.x upgrade how-to]<br />
* Reliable access to all configured storage<br />
* A healthy cluster<br />
* Valid and tested backup of all VMs and CTs (in case something goes wrong)<br />
* Correct configuration of the repository<br />
* At least 4 GiB free disk space on the root mount point (see the quick check after this list).<br />
* Check [[#Known_upgrade_issues|known upgrade issues]]<br />
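<br />
A quick way to verify the free space on the root mount point before you start:<br />
<br />
 df -h /<br />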
<br />
== Testing the Upgrade ==<br />
<br />
An upgrade test can be easily performed using a standalone server. Install the Proxmox VE 6.4 ISO on some test hardware, then upgrade this installation to the latest minor version of Proxmox VE 6.4 (see [[Package repositories]]). To replicate the production setup as closely as possible, copy or create all relevant configurations to the test machine, then start the upgrade. It is also possible to install Proxmox VE 6.4 in a VM and test the upgrade in this environment.<br />
<br />
== Actions step-by-step ==<br />
<br />
The following actions need to be carried out from the command line of each Proxmox VE node in your cluster<br />
<br />
'''Perform the actions via console or SSH, preferably via console, to avoid interrupted SSH connections. Do not carry out the upgrade when connected via the virtual console offered by the GUI, as this will get interrupted during the upgrade.'''<br />
<br />
Remember to ensure that a valid backup of all VMs and CTs has been created before proceeding.<br />
<br />
=== Continuously use the '''pve6to7''' checklist script ===<br />
<br />
A small checklist program named '''pve6to7''' is included in the latest Proxmox VE 6.4 packages. The program will provide hints and warnings about potential issues before, during and after the upgrade process. You can call it by executing:<br />
<br />
pve6to7<br />
<br />
To run it with '''all''' checks enabled, execute:<br />
<br />
pve6to7 --full<br />
<br />
Make sure to run the full checks at least once before the upgrade.<br />
<br />
This script only '''checks''' and reports things. By default, no changes to the system are made and thus, none of the issues will be automatically fixed.<br />
You should keep in mind that Proxmox VE can be heavily customized, so the script may not recognize all the possible problems with a particular setup!<br />
<br />
It is recommended to re-run the script after each attempt to fix an issue. This ensures that the actions taken actually fixed the respective warning.<br />
<br />
=== Move important Virtual Machines and Containers ===<br />
<br />
If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is being upgraded. A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work. A migration from a newer Proxmox VE version to an older version may work, but is generally not supported. Keep this in mind when planning your cluster upgrade.<br />
<br />
=== Check Linux Network Bridge MAC ===<br />
<br />
With Proxmox VE 7, the MAC address of the Linux bridge itself may change, as noted in [[Upgrade from 6.x to 7.0#Linux Bridge MAC-Address Change]].<br />
<br />
In hosted setups, the MAC address of a host is often restricted, to avoid spoofing by other hosts.<br />
<br />
==== Solution A: Use ifupdown2 ====<br />
<br />
The ifupdown2 package, which Proxmox ships in the Proxmox VE 7.x repository, was adapted with a new policy configuration, so that it always derives the MAC address from the bridge port.<br />
<br />
If you're already using ifupdown2 with Proxmox VE 6.4, and you upgrade to Proxmox VE 7.x, the ifupdown2 version <code>3.1.0-1+pmx1</code> (or newer) will ensure that you do not need to adapt anything else.<br />
<br />
==== Solution B: Hardcode MAC Address ====<br />
<br />
You can either tell your hosting provider the new (additional) bridge MAC address of your Proxmox VE host, or you need to explicitly configure the bridge to keep using the old MAC address.<br />
<br />
You can get the MAC address of all network devices, using the command <code>ip -c link</code>.<br />
Then, edit your network configuration at <code>/etc/network/interfaces</code>, adding a <code>hwaddress MAC</code> line to the respective bridge section.<br />
<br />
For example, by default, the main bridge is called <code>vmbr0</code>, so the change would look like:<br />
<br />
auto vmbr0<br />
iface vmbr0 inet static<br />
address 192.168.X.Y/24<br />
     hwaddress aa:bb:cc:12:34:56<br />
# ... remaining options<br />
<br />
If ifupdown2 is installed, you can use <code>ifreload -a</code> to apply this change. For the legacy ifupdown, <code>ifreload</code> is not available, so you either need to reboot or use <code>ifdown vmbr0; ifup vmbr0</code> (enter both semi-colon separated commands in one go!).<br />
<br />
Note: hard-coding the MAC requires manual adaptation if you ever change your physical NIC.<br />
<br />
=== Update the configured APT repositories ===<br />
<br />
First, make sure that the system is using the latest Proxmox VE 6.4 packages:<br />
<br />
apt update<br />
apt dist-upgrade<br />
<br />
Update all Debian repository entries to Bullseye.<br />
<br />
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
<br />
Note that Debian changed its security update repo from <code>deb http://security.debian.org buster/updates main</code> to <code>deb http://security.debian.org bullseye-security main</code> for the sake of consistency.<br />
The above command accounts for that change already.<br />
<br />
You must also disable all Proxmox VE 6.x repositories, including the pve-enterprise repository, the pve-no-subscription repository and the pvetest repository. Use the # symbol to comment out these repositories in the /etc/apt/sources.list.d/pve-enterprise.list and /etc/apt/sources.list files.<br />
See [[Package_Repositories]]<br />
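<br />
One way to comment out all entries in the enterprise repository file (a sketch, assuming the default file name; repeat for any other files containing Proxmox VE 6.x entries):<br />
<br />
 sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list<br />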
<br />
=== Add the Proxmox VE 7 Package Repository ===<br />
<br />
echo "deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list<br />
<br />
For the no-subscription repository, see [[Package Repositories]]. Rather than commenting out/removing the PVE 6.x repositories, as was previously mentioned, you could also run the following command to update to the Proxmox VE 7 repositories:<br />
sed -i -e 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-install-repo.list <br />
<br />
'''(Ceph only)''' Replace ceph.com repositories with proxmox.com ceph repositories<br />
<br />
echo "deb http://download.proxmox.com/debian/ceph-octopus bullseye main" > /etc/apt/sources.list.d/ceph.list<br />
<br />
If there is a backports line, remove it - the upgrade has not been tested with packages from the backports repository installed.<br />
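<br />
To quickly check whether any backports entries are configured (a simple sketch):<br />
<br />
 grep -rn backports /etc/apt/sources.list /etc/apt/sources.list.d/<br />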
<br />
Update the repositories' data:<br />
<br />
apt update<br />
<br />
=== Upgrade the system to Debian Bullseye and Proxmox VE 7.0 ===<br />
<br />
Note that the time required for finishing this step heavily depends on the system's performance, especially the root filesystem's IOPS and bandwidth.<br />
A slow spinning disk can take 60 minutes or more, while for a high-performance server with SSD storage, the dist-upgrade can be finished in 5 minutes.<br />
<br />
Start with this step, to get the initial set of upgraded packages:<br />
<br />
apt dist-upgrade<br />
<br />
During the above step, you may be asked whether to keep or replace certain configuration files for some packages. These prompts are not relevant to the Proxmox VE upgrade, so you can choose what's most appropriate for your setup.<br />
<br />
If the command exits successfully, you can reboot the system in order to use the new PVE kernel.<br />
<br />
== After the Proxmox VE upgrade ==<br />
<br />
=== For Clusters ===<br />
<br />
* Check that all nodes are up and running on the latest package versions. <!-- TODO what else --><br />
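<br />
A quick way to compare package versions across nodes (run this on every node and compare the output):<br />
<br />
 pveversion -v<br />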
<br />
=== For Hyper-converged Ceph ===<br />
<br />
Now you can upgrade the Ceph cluster to the Pacific release, following the article [[Ceph Octopus to Pacific]].<br />
Note that while an upgrade is recommended, it's not strictly necessary. Ceph Octopus will be supported in Proxmox VE 7.x until its end of life, around the end of 2022/Q2.<br />
<br />
== Checklist issues ==<br />
<br />
=== proxmox-ve package is too old ===<br />
<br />
Check the configured package repository entries; they still need to be for Proxmox VE 6.x and buster at this step (see [[Package_Repositories]]). Then run <br />
<br />
apt update<br />
<br />
followed by<br />
<br />
apt dist-upgrade<br />
<br />
to get the latest PVE 6.x packages '''before''' upgrading to PVE 7.x<br />
<br />
== Known upgrade issues ==<br />
<br />
=== General ===<br />
<br />
As a Debian based distribution, Proxmox VE is affected by most issues and changes affecting Debian.<br />
Thus, ensure that you read the [https://www.debian.org/releases/bullseye/amd64/release-notes/ch-information.en.html upgrade specific issues for bullseye]<br />
<br />
Please also check the known issue list from the Proxmox VE 7.0 changelog: https://pve.proxmox.com/wiki/Roadmap#7.0-known-issues<br />
<br />
==== Upgrade wants to remove package 'proxmox-ve' ====<br />
<br />
If you have installed Proxmox VE on top of Debian Buster, you may have installed the package 'linux-image-amd64', which conflicts with current 6.x setups. To solve this, you have to remove this package with<br />
apt remove linux-image-amd64<br />
before the dist-upgrade.<br />
<br />
==== No 'root' password set ====<br />
<br />
The root account must have a password set (that you remember).<br />
If not, the <code>sudo</code> package will be uninstalled during the upgrade, and so you will not be able to log in again as root.<br />
<br />
If you used the official Proxmox VE or Debian installer, and you didn't remove the password after the installation, you are safe.<br />
<br />
=== Third-party Storage Plugins ===<br />
<br />
The external, third-party storage plugin mechanism had an ABI-version bump that reset the ABI-age.<br />
This means there was an incompatible breaking change, which external plugins must adapt to before they can be loaded again.<br />
<br />
If you use any external storage plugin, you need to wait until the plugin author has adapted it for Proxmox VE 7.0.<br />
<br />
=== Network ===<br />
<br />
==== Linux Bridge MAC-Address Change ====<br />
<br />
With Proxmox VE 7 / Debian Bullseye, a new systemd version is used that changes how the MAC addresses of Linux network bridge devices are calculated:<br />
<blockquote><br />
MACAddressPolicy=persistent was extended to set MAC addresses based on the device name.<br />
Previously addresses were only based on the ID_NET_NAME_* attributes, which meant that interface names would never be generated for virtual devices.<br />
Now a persistent address will be generated for most devices, including in particular bridges.<br />
<br />
-- https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html#v241<br />
</blockquote><br />
<br />
A unique and persistent MAC address is now calculated using the bridge name and the unique machine-id (<code>/etc/machine-id</code>), which is generated at install time.<br />
<br />
Please either ensure that any ebtables or similar rules that use the previous bridge MAC address are updated, or configure the desired bridge MAC address explicitly, by switching to ifupdown2 and adding <code>hwaddress</code> to the respective entry in <code>/etc/network/interfaces</code>.<br />
<br />
=== Older Virtual Machines with Windows and Static Network ===<br />
<br />
Since QEMU 5.2, first introduced in Proxmox VE 6.4, the way QEMU sets the ACPI ID for PCI devices changed to conform to standards. This led to some Windows guests losing their device configuration, as they detect the re-ordered devices as new ones.<br />
<br />
Due to this, Proxmox VE will now pin the machine version for Windows-based guests to the newest available on guest creation, or to the minimum of (5.2, latest available) for existing ones.<br />
You can also easily change the machine-version through the web-interface now. See [https://forum.proxmox.com/threads/warning-latest-patch-just-broke-all-my-windows-vms-6-3-4-patch-inside-see-53.84915/page-2#post-373380 this forum thread] for further information.<br />
<br />
Note that if you have already upgraded to Proxmox VE 6.4, your system has implemented this change already, so you can ignore it.<br />
<br />
=== Old Container and CGroupv2 ===<br />
<br />
Since Proxmox VE 7.0, the default is a pure cgroupv2 environment.<br />
Previously a "hybrid" setup was used, where resource control was mainly done in cgroupv1 with an additional cgroupv2 controller which could take over some subsystems via the cgroup_no_v1 kernel command line parameter. (See the [https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html kernel parameter documentation] for details.)<br />
<br />
cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 (released in 2016) or newer support cgroupv2, as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).<br />
<br />
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases with a systemd version that is too old to run in a cgroupv2 environment; for details and possible fixes see:<br />
https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat<br />
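<br />
A quick sketch to check the systemd version inside a container before the upgrade (replace 101 with your CT ID; this only works for containers that use systemd as their init system):<br />
<br />
 pct exec 101 -- systemctl --version<br />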
<br />
== Troubleshooting ==<br />
<br />
=== Failing upgrade to "bullseye" ===<br />
<br />
Make sure that the repository configuration for Bullseye is correct.<br />
<br />
If there was a network failure and the upgrade was only partially completed, try to repair the situation with <br />
<br />
apt -f install<br />
<br />
If you see the following message:<br />
<br />
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!<br />
<br />
then one or more of the currently existing packages cannot be upgraded since the proper Bullseye repository is not configured.<br />
<br />
Check which of the previously used repositories (i.e. for Buster) do not exist for Bullseye or have not been upgraded to Bullseye ones. <br />
<br />
If a corresponding Bullseye repository exists, upgrade the configuration (see also [[#Add the Proxmox VE 7 Package Repository| special remark for Ceph]]).<br />
<br />
If an upgrade is not possible, configure all repositories as they were before the upgrade attempt, then run: <br />
<br />
apt update<br />
<br />
again. Then remove '''all''' packages which are currently installed from that repository. Following this, start the [[#Add the Proxmox VE 7 Package Repository|upgrade procedure]] again.<br />
<br />
=== Unable to boot due to grub failure ===<br />
<br />
See [[Recover From Grub Failure]]<br />
<br />
If your system was installed on ZFS using legacy BIOS boot with an ISO older than Proxmox VE 6.4, incompatibilities between the ZFS implementation in GRUB and newer ZFS versions can lead to a broken boot.<br />
Check the article on switching to <code>proxmox-boot-tool</code> [[ZFS: Switch Legacy-Boot to Proxmox Boot Tool]] for more details.<br />
<br />
== External links ==<br />
<br />
[https://www.debian.org/releases/bullseye/amd64/release-notes/ Release Notes for Debian 11.0 (bullseye), 64-bit PC]<br />
<br />
[[Category: HOWTO]][[Category: Installation]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=Full_Mesh_Network_for_Ceph_Server&diff=11080Full Mesh Network for Ceph Server2021-05-11T05:54:07Z<p>M.almalat: </p>
<hr />
<div>== Introduction ==<br />
<br />
This wiki page describes how to configure a three-node [https://en.wikipedia.org/wiki/Mesh_networking "meshed network"] on Proxmox VE (or any other Debian-based Linux distribution), which can, for example, be used for connecting [[Ceph Server | Ceph Servers]] or nodes in a [[Proxmox VE 4.x Cluster | Proxmox VE Cluster]] with the maximum possible bandwidth and without using a switch. This can also work with bigger clusters; in general you need (number_of_nodes - 1) NIC ports on each node, e.g. you'd need 4 NIC ports for a 5-node full mesh.<br />
A big advantage of this setup is that you can achieve a fast network connection with 10, 40, or more GBit/s bandwidth WITHOUT buying an expensive, fast-enough network switch.<br />
<br />
There are two possible methods to achieve a full mesh:<br />
<br />
# each packet is sent to both other nodes<br />
# each packet is sent to the addressed node only (controlled by routing), therefore multicast is not possible.<br />
<br />
In general, the first one is recommended, as it is easier to set up and supports multicast. If multicast is not needed on this network, method 2 can provide a more efficient use of the total bandwidth.<br />
<br />
== Example ==<br />
<br />
3 servers:<br />
* Node1 with IP addresses x.x.x.50<br />
* Node2 with IP addresses x.x.x.51<br />
* Node3 with IP addresses x.x.x.52<br />
<br />
3 to 4 Network ports in each server:<br />
* ens18, ens19 will be used for the actual full mesh. Physical direct connections to the other two servers, 10.15.15.y/24<br />
* ens20 connection to WAN (internet/router), used by vmbr0, 192.168.2.y<br />
* ens21 (optional) LAN (for cluster traffic, etc.) 10.14.14.y<br />
<br />
Direct connections between servers:<br />
* Node1/ens18 - Node2/ens19<br />
* Node2/ens18 - Node3/ens19<br />
* Node3/ens18 - Node1/ens19<br />
<br />
<pre><br />
+-----------+<br />
| Node1 |<br />
+-----------+<br />
|ens18|ens19|<br />
+--+------+-+<br />
| |<br />
+-----+ | | +-----+<br />
+-------+ens18+-----------+ +-------+ens18+-------+<br />
| Node2 +-----+ | | +-----+ Node3 |<br />
+-------+ens19+--------+ +-----------+ens19+-------+<br />
+-----+ +-----+<br />
</pre><br />
<br />
== Method 1 (broadcast) ==<br />
Create a "broadcast" bond with the given interfaces on every node.<br />
This can be done over the GUI or on the command-line.<br />
<br />
=== GUI ===<br />
On the GUI go to the node level -> System -> Network.<br />
Then click on "Create" and select "Linux Bond".<br />
In the Wizard make your configuration without a gateway and set mode to "broadcast".<br />
<br />
Reboot the node to activate the new network settings.<br />
<br />
=== Command-Line ===<br />
Add the following lines to '/etc/network/interfaces'.<br />
<br />
<pre><br />
auto bond<No><br />
iface bond<No> inet static<br />
address <IP><br />
netmask <Netmask><br />
slaves <Nic1> <Nic2><br />
bond_miimon 100<br />
bond_mode broadcast<br />
#Full Mesh<br />
</pre><br />
<br />
Then start the bond<br />
<pre><br />
ifup bond<No><br />
</pre><br />
<br />
In Node1 of the setup example described above, /etc/network/interfaces will look as follows:<br />
<br />
<pre><br />
iface lo inet loopback<br />
<br />
iface ens20 inet manual<br />
<br />
auto ens21<br />
iface ens21 inet static<br />
address 10.14.14.50<br />
netmask 255.255.255.0<br />
<br />
<br />
iface ens18 inet manual<br />
<br />
iface ens19 inet manual<br />
<br />
auto bond0<br />
iface bond0 inet static<br />
address 10.15.15.50<br />
netmask 255.255.255.0<br />
slaves ens18 ens19<br />
bond_miimon 100<br />
bond_mode broadcast<br />
<br />
<br />
auto vmbr0<br />
iface vmbr0 inet static<br />
address 192.168.2.50<br />
netmask 255.255.240.0<br />
gateway 192.168.2.1<br />
bridge_ports ens20<br />
bridge_stp off<br />
bridge_fd 0<br />
</pre><br />
<br />
== Method 2 (routed) ==<br />
<br />
Corresponding to the setup example described above, the 3 nodes have to be configured as described in the following sections.<br />
<br />
=== Node1 ===<br />
==== /etc/network/interfaces ====<br />
<pre><br />
auto lo<br />
iface lo inet loopback<br />
<br />
iface ens20 inet manual<br />
<br />
auto ens21<br />
iface ens21 inet static<br />
address 10.14.14.50<br />
netmask 255.255.255.0<br />
<br />
# Connected to Node2 (.51)<br />
auto ens18<br />
iface ens18 inet static<br />
address 10.15.15.50<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.51/32 dev ens18<br />
down ip route del 10.15.15.51/32<br />
<br />
# Connected to Node3 (.52)<br />
auto ens19<br />
iface ens19 inet static<br />
address 10.15.15.50<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.52/32 dev ens19<br />
down ip route del 10.15.15.52/32<br />
<br />
auto vmbr0<br />
iface vmbr0 inet static<br />
address 192.168.2.50<br />
netmask 255.255.240.0<br />
gateway 192.168.2.1<br />
bridge_ports ens20<br />
bridge_stp off<br />
bridge_fd 0<br />
<br />
</pre><br />
<br />
==== route ====<br />
<pre><br />
root@pve-2-50:~# ip route<br />
default via 192.168.2.1 dev vmbr0 onlink <br />
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.50 <br />
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.50 <br />
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.50 <br />
10.15.15.52 dev ens19 scope link <br />
10.15.15.51 dev ens18 scope link <br />
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.50 <br />
</pre><br />
<br />
=== Node2 ===<br />
==== /etc/network/interfaces ====<br />
<pre><br />
auto lo<br />
iface lo inet loopback<br />
<br />
iface ens20 inet manual<br />
<br />
auto ens21<br />
iface ens21 inet static<br />
address 10.14.14.51<br />
netmask 255.255.255.0<br />
<br />
# Connected to Node3 (.52)<br />
auto ens18<br />
iface ens18 inet static<br />
address 10.15.15.51<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.52/32 dev ens18<br />
down ip route del 10.15.15.52/32<br />
<br />
# Connected to Node1 (.50)<br />
auto ens19<br />
iface ens19 inet static<br />
address 10.15.15.51<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.50/32 dev ens19<br />
down ip route del 10.15.15.50/32<br />
<br />
auto vmbr0<br />
iface vmbr0 inet static<br />
address 192.168.2.51<br />
netmask 255.255.240.0<br />
gateway 192.168.2.1<br />
bridge_ports ens20<br />
bridge_stp off<br />
bridge_fd 0<br />
<br />
</pre><br />
<br />
==== route ====<br />
<pre><br />
root@pve-2-51:/# ip route<br />
default via 192.168.2.1 dev vmbr0 onlink <br />
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.51 <br />
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.51 <br />
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.51 <br />
10.15.15.52 dev ens18 scope link <br />
10.15.15.50 dev ens19 scope link <br />
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.51 <br />
</pre><br />
<br />
=== Node3 ===<br />
==== /etc/network/interfaces ====<br />
<pre><br />
auto lo<br />
iface lo inet loopback<br />
<br />
iface ens20 inet manual<br />
<br />
auto ens21<br />
iface ens21 inet static<br />
address 10.14.14.52<br />
netmask 255.255.255.0<br />
<br />
# Connected to Node1 (.50)<br />
auto ens18<br />
iface ens18 inet static<br />
address 10.15.15.52<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.50/32 dev ens18<br />
down ip route del 10.15.15.50/32<br />
<br />
# Connected to Node2 (.51)<br />
auto ens19<br />
iface ens19 inet static<br />
address 10.15.15.52<br />
netmask 255.255.255.0<br />
up ip route add 10.15.15.51/32 dev ens19<br />
down ip route del 10.15.15.51/32<br />
<br />
auto vmbr0<br />
iface vmbr0 inet static<br />
address 192.168.2.52<br />
netmask 255.255.240.0<br />
gateway 192.168.2.1<br />
bridge_ports ens20<br />
bridge_stp off<br />
bridge_fd 0<br />
</pre><br />
<br />
==== route ====<br />
<pre><br />
root@pve-2-52:~# ip route<br />
default via 192.168.2.1 dev vmbr0 onlink <br />
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.52 <br />
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.52 <br />
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.52 <br />
10.15.15.51 dev ens19 scope link <br />
10.15.15.50 dev ens18 scope link <br />
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.52 <br />
</pre><br />
<br />
[[Category: HOWTO]] [[Category: Cluster]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=VNC_Client_Access&diff=10969VNC Client Access2021-01-29T09:17:54Z<p>M.almalat: </p>
<hr />
<div>= Standard Console Access =<br />
<br />
By default, Proxmox VE provides access via [https://kanaka.github.io/noVNC/noVNC/vnc.html noVNC] and/or [[SPICE]], and it is recommended to use this access whenever possible. If you need browser-independent access for some reason, you can use an ordinary VNC client.<br />
<br />
= Enabling Access from other VNC clients =<br />
<br />
<br />
It is possible to enable VNC access for use with common VNC clients such as RealVNC, TightVNC, Remmina, etc.<br />
<br />
== Configure VNC Access via Monitor ==<br />
<br />
# Go to the VM's 'Monitor' panel in the web interface.<br />
# You can set up plain VNC or a password-secured one:<br />
#* for the plain one type the following into the monitor:<br />
#: <pre>change vnc 0.0.0.0:100</pre><br />
#: 100 denotes the port, this will get added to the VNC base port of 5900, so in this case the VNC server listens on all addresses on port 6000.<br />
#* for the password secured one type the following into the monitor:<br />
#: <pre>change vnc 0.0.0.0:100,password&#10;set_password vnc foobar1&#10;expire_password vnc +30</pre><br />
#: note: the first "password" parameter after the IP address mustn't be replaced by a password but is the word "password" itself, this is just a boolean parameter telling QEMU that the server needs a password.<br />
# now you may connect via the ip address and port (6000 in the above example)<br />
<br />
The configuration is not persistent, i.e. after the VM has been stopped and started again, the above actions have to be repeated in order to enable VNC access via an external client.<br />
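<br />
If you prefer the command line over the web interface's 'Monitor' panel, you can open the same monitor via <code>qm monitor</code> (a sketch, using VM 100 as an example):<br />
<br />
 qm monitor 100<br />
<br />
and then enter the same ''change vnc'', ''set_password vnc'' and ''expire_password vnc'' commands shown above at the monitor prompt.<br />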
<br />
== Configure VNC Access via Configuration File ==<br />
<br />
* Add a line to the VM's configuration file /etc/pve/local/qemu-server/<KVM ID>.conf which specifies the VNC display number as follows ("77" in the example below):<br />
<br />
args: -vnc 0.0.0.0:77 <br />
<br />
The display number can be freely chosen, but each number must occur only once. The VNC service then listens on port 5900+display_number. Note that connections via noVNC use display numbers starting from 0 consecutively; therefore it's recommended to use higher numbers in order to avoid conflicts.<br />
<br />
* Connect from your VNC client to the Proxmox host's IP address and the port as specified (5977 in the above example).<br />
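<br />
For example, with a TigerVNC/TightVNC-style client (the host address is a placeholder; the double colon selects the TCP port directly):<br />
<br />
 vncviewer 192.0.2.10::5977<br />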
<br />
Note: Using this method it is not possible to set a VNC password.<br />
<br />
<br />
<br />
[[Category: HOWTO]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=Nagios_check_mk&diff=10944Nagios check mk2020-12-16T12:46:55Z<p>M.almalat: /* Plugin */</p>
<hr />
<div>This is a check for the Nagios check addon "check_mk" (http://mathias-kettner.de/check_mk.html). The idea is that only VM's that are running while doing the inventory are getting monitored. If a VM is added or removed do "Full Scan"<br />
<br />
The check also collects performance data for the %CPU load caused by the VM and the % of total memory consumption of the VM, as well as the reserved virtual memory size (VSZ) of the related kvm process.<br />
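<br />
The "Full Scan" mentioned above can also be triggered from the command line on an OMD site (a sketch; 'myhost' is a placeholder for the monitored Proxmox node):<br />
<br />
<pre><br />
# redo the service inventory for the host, then restart the monitoring core<br />
cmk -II myhost<br />
cmk -R<br />
</pre><br />
<br />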
===Client-Check===<br />
/usr/lib/check_mk_agent/plugins/mh_qemu<br />
<pre><br />
#!/bin/bash<br />
#/usr/lib/check_mk_agent/plugins<br />
<br />
# based upon 'qemu' from<br />
# 12/2010 Matthias Henze<br />
# Lizenz: GPL v2<br />
#<br />
# updated for libvirtd (virsh) by<br />
# Jonathan Mills 09/2011<br />
#<br />
# updated by<br />
# Christian Burmeister 05/2015<br />
# updated by<br />
# adorfer 01/2017 for proxmox 4 pve<br />
# updated by proxmox 07/2019 for proxmox ve 5&6<br />
<br />
if which qm >/dev/null ; then<br />
echo '<<<qemu>>>'<br />
qm list | grep -v VMID | while read L<br />
do<br />
if [[ ! -z $L ]]; then<br />
<br />
ID=$(echo $L | awk '{print $1}')<br />
XNAME=$(echo $L | awk '{$1=$NF=$(NF-1)=$(NF-2)=$(NF-3)="";print $0}')<br />
NAME=`echo $XNAME | sed 's/ /_/g'`<br />
STATE=$(echo $L | awk '{print $(NF-3)}')<br />
PID=$(ps aux | grep kvm | grep "id $ID" | head -1 | tail -1| awk '{print $2}')<br />
if [[ ! -z $PID ]] && [ "$PID" -gt "0" ]; then<br />
PS=$(ps aux | grep kvm | grep $PID | head -1|tail -1)<br />
MEM=$(echo $PS|awk -- '{print $5}')<br />
MEM=$(echo $MEM / 1024 | bc)<br />
DATA=$(top -p $PID -n 1 -b | tail -1)<br />
PCPU=$(echo $DATA | awk -- '{print $9}'|tr , .)<br />
PMEM=$(echo $DATA | awk -- '{print $10}'|tr , .)<br />
MCPU=$(echo $PS | sed 's/.*maxcpus=\([^ ]*\)\ .*/\1/' )<br />
RCPU=$(echo "scale=1; $PCPU / $MCPU"| bc)<br />
else<br />
MEM=""<br />
RCPU=""<br />
PMEM=""<br />
fi<br />
echo $ID" "$NAME" "$STATE" "$MEM" "$RCPU" "$PMEM <br />
fi<br />
done<br />
fi<br />
<br />
</pre><br />
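<br />
To verify that the plugin produces output on the client (assuming the standard check_mk agent is installed), you can run the agent locally and look at the qemu section:<br />
<br />
<pre><br />
check_mk_agent | grep -A 10 '<<<qemu>>>'<br />
</pre><br />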
<br />
===Plugin===<br />
/omd/versions/default/share/check_mk/checks/qemu<br />
<br />
The path assumes the use of OMD (Open source Monitoring Distribution).<br />
<pre><br />
#!/usr/bin/python<br />
# -*- encoding: utf-8; py-indent-offset: 4 -*-<br />
<br />
# based upon 'qemu' from<br />
# 12/2010 Matthias Henze<br />
# Lizenz: GPL v2<br />
#<br />
# updated for libvirtd (virsh) by<br />
# Jonathan Mills 09/2011<br />
#<br />
# updated by<br />
# Christian Burmeister 05/2015<br />
<br />
# updated by Proxmox 07/2019<br />
<br />
<br />
# Example output from agent:<br />
# <<<qemu>>><br />
# 4 i-4B9008BE running 2048 4.0 2.7<br />
# 5 i-44F608B6 running 2048 0.0 0.7<br />
<br />
<br />
<br />
<br />
# inventory<br />
def inventory_qemu(checkname, info):<br />
inventory = []<br />
for line in info:<br />
if line[2] == "running": # only VM's running while inventory are monitored !<br />
vm = line[1] # we want to capture hostname, not vm id here<br />
<br />
# Fix annoying OpenStack misnaming of VMs<br />
name = vm.split('-')<br />
if name[0] == "instance":<br />
name[0] = "i"<br />
vm = '-'.join(name)<br />
##<br />
<br />
inventory.append( (vm, None) )<br />
return inventory<br />
<br />
# check<br />
def check_qemu(name, param, info):<br />
<br />
for line in info:<br />
perfdata = []<br />
<br />
vm = line[1]<br />
host = vm.split('-')<br />
if host[0] == "instance":<br />
host[0] = "i"<br />
vm = '-'.join(host)<br />
<br />
if vm == name:<br />
item = line[0]<br />
status = line[2]<br />
assigned_mem = line[3]<br />
<br />
infotext = "%s (id: %s" % (status, item)<br />
<br />
if status == "running":<br />
# 4 i-4B9008BE running 2048 4.0 2.7<br />
if len(line) == 6:<br />
current_cpu = int(round(float(line[4])))<br />
infotext += ", CPU: %s%%" % (current_cpu)<br />
perfdata.append( ( "cpu_%", current_cpu ) )<br />
<br />
current_mem = int(round(float(line[5])))<br />
infotext += ", Memory: (Virtual SiZe: %s MB, used: %s%%" % (assigned_mem ,current_mem)<br />
perfdata.append( ( "memory_current_%", current_mem ) )<br />
perfdata.append( ( "memory__assigned_MB", assigned_mem ) )<br />
<br />
<br />
infotext += "))"<br />
<br />
warn = 80<br />
if current_cpu > warn or current_mem > warn:<br />
return (1, "WARN - status is " + infotext, perfdata)<br />
else:<br />
return (0, "OK - status is " + infotext, perfdata)<br />
else:<br />
infotext += ")"<br />
return (2, "CRITICAL - status is " + infotext, perfdata)<br />
<br />
return (3, "UNKNOWN - VM not found in agent output")<br />
<br />
# declare the check to Check_MK<br />
check_info['qemu'] = \<br />
(check_qemu, "VM %s", 1, inventory_qemu)<br />
<br />
</pre><br />
<br />
[[Category: HOWTO]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=PVE-zsync&diff=10943PVE-zsync2020-12-15T14:17:55Z<p>M.almalat: </p>
<hr />
<div>== Introduction ==<br />
With the Proxmox VE ZFS replication manager (pve-zsync) you can synchronize your virtual machine (virtual disks and VM configuration) or directory stored on ZFS between two servers. By synchronizing, you have a full copy of your virtual machine on the second host and you can start your virtual machines on the second server (in case of data loss on the first server).<br />
<br />
By default, the tool syncs every 15 minutes, but the synchronization interval is fully configurable via the integrated cron job setup. The configuration of pve-zsync can be done either on the source server or on the target server.<br />
<br />
This is useful for advanced backup strategies.<br />
<br />
'''Note:''' pve-zsync was introduced in Proxmox VE 3.4 as a technology preview. The package can also be installed on plain Debian Wheezy, Jessie or Stretch servers, as long as ZFS is configured.<br />
<br />
'''Note:''' Our ZFS is configured to auto-mount all subvols, so keep this in mind if you use pve-zsync. Also, zvols will be scanned by LVM.<br />
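<br />
A common way to keep LVM from scanning zvols is to reject them in the LVM configuration (a sketch; merge this with any existing global_filter setting in /etc/lvm/lvm.conf):<br />
<br />
 # /etc/lvm/lvm.conf, inside the devices { } section<br />
 global_filter = [ "r|/dev/zd.*|" ]<br />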
<br />
== Main features ==<br />
*Speed limiter<br />
*Syncing interval can be set by cron<br />
*Syncing VM (disks and config) but also ZFS Datasets<br />
*Can keep multiple backups <br />
*Can be used in both directions <br />
*Can send on local host<br />
*Traffic is encrypted<br />
<br />
== Limitations ==<br />
*not possible to sync recursively<br />
*only ssh for transfer<br />
*email notification is done by cron<br />
*Name resolution is not taken into account; you have to use IP addresses (even hostnames from the hosts file will not work)<br />
<br />
== System requirements ==<br />
*Both target and source server must support ZFS (best practice: use Proxmox VE hosts).<br />
*SSH must be installed and configured<br />
*to receive email-notifications, a working mail server is required (e.g. postfix).<br />
*cstream<br />
*perl <br />
*scp<br />
*JSON.pm (libjson-perl is the Debian package name; e.g. needed on Wheezy)<br />
<br />
== PVE Storage Replication and PVE-zsync ==<br />
<br />
PVE Storage Replication and PVE-zsync work completely independently and do not interfere with each other, as long as the following requirements are met:<br />
<br />
* The destination pools / datasets are different.<br />
* You do not migrate a guest to another node.<br />
<br />
Summary of the differences:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Characteristic !! PVE Storage Replication !! PVE-zsync <br />
|-<br />
| Replication || Cluster Wide || to every Node which meets the requirements <br />
|-<br />
| Operation mode || push || push or pull<br />
|-<br />
| Management || GUI and Command line || Command line only <br />
|-<br />
| Keep snapshot || no || yes <br />
|-<br />
| Migration || yes || no <br />
|-<br />
| Main Goal || redundancy || offsite backup <br />
|}<br />
<br />
=== Migrate from pve-zsync to Storage Replication ===<br />
<br />
When you have a cluster and would like to switch to Storage Replication, you have to destroy the pve-zsync job, clean up the storage and create a new Storage Replication job.<br />
<br />
pve-zsync destroy <vmid> [--jobname <test>]<br />
<br />
Then you have to cleanup the zfs storage.<br />
<br />
Start at the source side and remove all snapshots which start with '@rep_'<br />
<br />
zfs destroy <pool>/[<path>/]vm-<VMID>-<type>-<number>@<rep_snapshots> <br />
<br />
Then destroy all guest datasets on the destination side.<br />
<br />
zfs destroy -R vm-<vmid>-*-<DiskNO><br />
<br />
Now you can create a new Storage Replication job.<br />
For more information, see [[Storage Replication]].<br />
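<br />
A minimal sketch of creating such a job on the command line (assuming guest 100 should be replicated to a node named pve2 every 15 minutes; the job ID, node name and schedule are examples):<br />
<br />
 pvesr create-local-job 100-0 pve2 --schedule "*/15"<br />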
<br />
== Configuration and use ==<br />
<br />
Install the package with apt, on your Proxmox VE host:<br />
<br />
apt-get install pve-zsync<br />
<br />
This tool basically needs no configuration. On first use, when you create a job with an unknown host, the tool will ask you for the password of the remote server.<br />
<br />
=== Sync a VM or ZFS dataset one time ===<br />
(N.B. this is also possible if a recurring job for that VM already exists; in that case keep in mind that the name given via --name must be the same).<br />
<br />
root@zfs1:~# pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --verbose --maxsnap 2 --name test1 --limit 512<br />
<br />
This command syncs VM 100, which is located on the server where the tool is called, and sends it to the server 192.168.1.2, onto the zpool tank, which has a subset named backup. --maxsnap specifies that 2 backups should be kept; if there are more than 2 backups, the oldest one will be erased (sorted by creation time). --name is only needed if there is already a sync job.<br />
The --limit parameter sets the speed limit used for syncing; here it would be 512 KBytes/s.<br />
<br />
=== Create a recurring sync job ===<br />
<br />
root@zfs2:~# pve-zsync create --source 192.168.1.1:100 --dest tank/backup --verbose --maxsnap 2 --name test1 --limit 512 --skip<br />
<br />
The --skip parameter disables the initial sync, which normally would be done immediately but can take a while, depending on the size of the backup. With --skip, the initial sync will be done at the first scheduled sync time.<br />
<br />
=== Delete a recurring sync job ===<br />
If you delete a job, the former backup data will not be erased; only the configuration will be removed.<br />
<br />
root@zfs2:~# pve-zsync destroy --source 192.168.1.1:100 --name test1<br />
<br />
--name is not necessary if it is the default.<br />
<br />
=== Pause a sync job ===<br />
If you want to pause a job, for example for maintenance on the source server, use:<br />
<br />
root@zfs2:~# pve-zsync disable --source 192.168.1.1:100 --name test1<br />
<br />
=== Reactivate a sync job ===<br />
To reactivate a job, because it was paused or because it failed, use:<br />
<br />
root@zfs2:~# pve-zsync enable --source 192.168.1.1:100 --name test1<br />
<br />
This will reset the error flag in case of failure.<br />
<br />
=== Changing parameters ===<br />
You can edit the job configuration in /etc/cron.d/pve-zsync, or destroy the job and create it again with the new parameters.<br />
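<br />
For orientation, a job entry in /etc/cron.d/pve-zsync looks roughly like the following; the exact schedule and options depend on how the job was created and on the pve-zsync version, so treat this only as a sketch (only options already shown in this article are used):<br />
<pre><br />
# m h dom mon dow user command<br />
*/15 * * * * root pve-zsync sync --source 192.168.1.1:100 --dest tank/backup --name test1 --maxsnap 2<br />
</pre><br />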
<br />
=== Information about the jobs ===<br />
To get an overview of all jobs, use:<br />
<br />
<pre><br />
root@zfs94:~# pve-zsync list<br />
SOURCE NAME STATE LAST SYNC TYPE <br />
192.168.15.95:100 testing1 ok 2015-05-13_14:44:00 ssh <br />
192.168.15.95:data/test1 testing1 syncing 2015-05-13_14:44:11 ssh <br />
<br />
root@zfs94:~# pve-zsync status<br />
SOURCE NAME STATUS <br />
192.168.15.95:100 testing1 ok <br />
192.168.15.95:data/test1 testing1 syncing <br />
</pre><br />
<br />
=== Recovering a VM ===<br />
You must recover the VM or dataset manually. (In one of the upcoming releases, restore for Proxmox VE VMs will be integrated.)<br />
<br />
First, stop the sync job for the VM or dataset in question.<br />
<br />
<b>NOTE:</b> If you do not, you can interfere with the running sync job, or your snapshot may be removed before you are able to send it.<br />
<pre><br />
root@zfs2:~# pve-zsync disable --source 192.168.15.1:100 --name test<br />
root@zfs2:~# pve-zsync list<br />
SOURCE NAME STATE LAST SYNC TYPE <br />
192.168.15.1:100 test stopped 2015-06-12_11:03:01 ssh <br />
</pre><br />
<br />
Then you can send the VM or dataset to the selected target. SSH is only needed if you send to a remote server.<br />
<pre><br />
zfs send <pool>/[<path>/]vm-<VMID>-disk-<number>@<last_snapshot> | [ssh root@<destination>] zfs receive <pool>/<path>/vm-<VMID>-disk-<number><br />
</pre><br />
<br />
If you recover a VM, you must also copy the config file and correct the virtual disk storage configuration accordingly.<br />
<pre><br />
cp /var/lib/pve-zsync/<VMID>.conf.rep_<JOB_NAME><VMID>_<TIMESTAMP> /etc/pve/qemu-server/<VMID>.conf<br />
</pre><br />
<br />
'''NOTE:''' On PVE 4.2+ the path is: <tt>/var/lib/pve-zsync/data/<VMID>.conf.rep_<JOB_NAME><VMID>_<TIMESTAMP></tt><br />
<br />
Example: restore VM 100 with 2 disks from 192.168.15.2 (pve2) to 192.168.15.1 (pve1) and change the VMID from 100 to 200:<br />
<pre><br />
root@zfs2:~# zfs send rpool/backup/vm-100-disk-1@rep_test100_2015-06-12_11:03:01 | ssh root@192.168.15.1 zfs receive vm/vm-200-disk-1<br />
root@zfs2:~# zfs send rpool/backup/vm-100-disk-2@rep_test100_2015-06-12_11:03:01 | ssh root@192.168.15.1 zfs receive vm/vm-200-disk-2<br />
root@zfs2:~# ssh root@192.168.15.1<br />
root@zfs1:~# cp /var/lib/pve-zsync/100.conf.rep_test100_2015-06-11_14:11:01 /etc/pve/qemu-server/200.conf<br />
root@zfs1:~# nano /etc/pve/qemu-server/200.conf<br />
</pre><br />
<br />
Now you have to change the storage name and the disk names in the config:<br />
<br />
<pre><br />
bootdisk: virtio0<br />
cores: 1<br />
memory: 512<br />
name: Debian8min<br />
net0: virtio=12:5E:F6:59:A9:BB,bridge=vmbr0<br />
numa: 0<br />
ostype: l26<br />
smbios1: uuid=11fa2fba-5670-4610-aabb-534ad7edeffe<br />
sockets: 1<br />
virtio0: zfs:vm-100-disk-1,size=10G<br />
virtio1: zfs:vm-100-disk-2,size=10G<br />
</pre><br />
<br />
to <br />
<br />
<pre><br />
bootdisk: virtio0<br />
cores: 1<br />
memory: 512<br />
name: Debian8min<br />
net0: virtio=12:5E:F6:59:A9:BB,bridge=vmbr0<br />
numa: 0<br />
ostype: l26<br />
smbios1: uuid=11fa2fba-5670-4610-aabb-534ad7edeffe<br />
sockets: 1<br />
virtio0: vm:vm-200-disk-1,size=10G<br />
virtio1: vm:vm-200-disk-2,size=10G<br />
</pre><br />
<br />
== Troubleshooting ==<br />
Keep in mind that the commands only work with IP addresses, not hostnames.<br />
<br />
<br />
===Job status is on error and data-set can't be erased on destination system===<br />
<br />
If you have problems with a sync job and, when you try to erase the destination zvol,<br />
you get the error "zfs dataset is busy", then LVM could be the problem.<br />
<br />
This can occur if you sync zvols that contain an LVM layout, because the host's LVM scans the zvols and keeps them busy.<br />
<br />
In this case, insert the following line into the ''devices'' section of /etc/lvm/lvm.conf<br />
<br />
filter = [ "r|/dev/zd*|" ]<br />
<br />
and reboot the system.<br />
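<br />
A quick, informal way to check whether the filter is effective (just an illustrative check, not an official procedure): after the reboot, LVM should no longer report any physical volumes on /dev/zd* devices.<br />
<pre><br />
# should print nothing if ZFS zvols are excluded from LVM scanning<br />
pvs 2>/dev/null | grep zd<br />
</pre><br />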
<br />
tbd.<br />
<br />
== Video Tutorials ==<br />
* [http://www.youtube.com/user/ProxmoxVE Proxmox VE Youtube channel]<br />
== Tips ==<br />
* As of 2016-11, if you migrate a VM that is used by pve-zsync:<br />
: edit /etc/cron.d/pve-zsync and change the IP address for the VM.<br />
: run 'pve-zsync enable --source ____ --name ____' or else there will be a warning when the job runs.<br />
[[Category:HOWTO]] [[Category: Installation]]</div>M.almalathttps://pve.proxmox.com/mediawiki/index.php?title=Translations&diff=10807Translations2020-08-04T10:03:46Z<p>M.almalat: /* Version 6.x */</p>
<hr />
<div>== Version 6.x ==<br />
<br />
Language files are located in the git repository:<br />
[https://git.proxmox.com/?p=proxmox-i18n.git;a=tree git.proxmox.com pve-i18n]<br />
<br />
* You can create a new translation with the following commands (replace <LANG> with the language ID); a combined sketch of all steps is shown after this list:<br />
<br />
# cd proxmox-i18n<br />
# make init-<LANG>.po<br />
<br />
* The .po file needs to be converted to a .js file by invoking the script located in the same directory:<br />
<pre>./po2js.pl -t pve xx.po >pve-lang-xx.js</pre><br />
<br />
* You need to have some Perl packages installed on your system. For Ubuntu:<br />
<pre>apt-get install perl liblocale-po-perl libjson-perl</pre><br />
<br />
* Copy the resulting file '''pve-lang-xx.js''' to the directory '''/usr/share/pve-i18n''' on your Proxmox VE server.<br />
<br />
* You can send the finished translation to the Proxmox team at office(at)proxmox.com, together with a signed contributor licence agreement. See [[Developer Documentation]].<br />
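<br />
Putting the steps above together, a complete run could look roughly like this; ''xx'' is the language ID placeholder used elsewhere in this article, and the target hostname is purely an example:<br />
<pre><br />
# inside a checkout of the proxmox-i18n repository<br />
cd proxmox-i18n<br />
make init-xx.po                      # only needed when starting a brand-new translation<br />
# edit xx.po with poedit or any other editor, then build the JavaScript file<br />
./po2js.pl -t pve xx.po > pve-lang-xx.js<br />
# copy the result to the Proxmox VE server (hostname is just an example)<br />
scp pve-lang-xx.js root@pve.example.com:/usr/share/pve-i18n/<br />
</pre><br />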
<br />
Proxmox VE 6.x management interface is available in the following languages.<br />
<br />
* Arabic (ar)<br />
* Catalan (ca)<br />
* Danish (da)<br />
* German (de)<br />
* English (en)<br />
* Spanish (es)<br />
* Euskera (Basque) (eu)<br />
* Persian (Farsi) (fa)<br />
* French (fr)<br />
* Hebrew (he)<br />
* Italian (it)<br />
* Japanese (ja)<br />
* Norwegian (Bokmal) (nb)<br />
* Norwegian (Nynorsk) (nn)<br />
* Polish (pl)<br />
* Portuguese (Brazil) (pt_BR)<br />
* Russian (ru)<br />
* Slovenian (sl)<br />
* Swedish (sv)<br />
* Turkish (tr)<br />
* Chinese (Simplified) (zh_CN)<br />
* Chinese (Traditional) (zh_TW)<br />
<br />
Note that most of the above translations are contributed by users; English is the original language the interface is written in.<br />
Our contributors try their best to translate it into the languages they know, so, at the time of writing, German, Italian, Turkish and French have good translations.<br />
<br />
== How to translate into different languages ==<br />
The translations can be done using [http://www.poedit.net/ poedit], an easy-to-use editor, or any other editor you prefer, e.g. Emacs. No programming knowledge is necessary to translate. If you want to translate, just contact office(at)proxmox.com.<br />
<br />
== PO files ==<br />
All released translation files are available. Maintainers can pick up these files if a translation needs rework.<br />
<br />
* [ftp://download.proxmox.com/sources/po-files/ Proxmox VE translations]<br />
<br />
[[Category: HOWTO]]</div>M.almalat