From Bryan at bryanfields.net Sun Feb 2 05:47:12 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Sat, 1 Feb 2020 23:47:12 -0500 Subject: [PVE-User] Firewall Filter ICMP types and connection limiting Message-ID: <4aaf9c61-9d01-ad9d-0544-d8ec1f3a1580@bryanfields.net> greetings, I'm a user of classical KVM on Linux and have recently started to work with Proxmox on two nodes in my rack. I have started to work with the firewall and I normally did a firewall on my hypervisor using /etc/network/interfaces calling /etc/network/firewall.sh which is a bash script of iptables. This would filter both forwarded traffic and traffic to the linux hypervisor. In proxmox things are a bit different (it's still iptables/ip6tables), and I'm attempting to use it the proxmox way by creating a security group and applying that to the VM and the hypervisor. I have a policy in iptables for forwared traffic below : iptables -t filter -A INPUT -j ACCEPT --in-interface $INET_IF --protocol \ icmp --icmp-type echo-request --match limit --limit 4/s --limit-burst 3 iptables -t filter -A INPUT -j log-and-drop --in-interface $INET_IF \ --protocol icmp --icmp-type echo-request I've attempted to set this up in the gui, but there's no option to add the ICMP type, only IP type, and nothing for the match option. If I add this in the config file, it's deleted upon the next time I look at it. I'm thinking surely there must be a way to include it, as blocking ICMP totally will break things. I've read the wiki and install guide, and can't really find any place to set this up at. Thanks, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From smr at kmi.com Sun Feb 2 23:11:09 2020 From: smr at kmi.com (Stefan M. Radman) Date: Sun, 2 Feb 2020 22:11:09 +0000 Subject: [PVE-User] PVE 6.1 incorrect MTU In-Reply-To: <1F78EE50-FE84-4A13-9E26-A422A163324D@kmi.com> References: <51a40f45-52ad-68e8-6e02-d3cf84ac778f@aasen.cx> <1F78EE50-FE84-4A13-9E26-A422A163324D@kmi.com> Message-ID: <7EFD53C1-7E77-465E-AB43-23472EFB81BD@kmi.com> Hi Ronny The issue was the mix of MTUs of vmbr0 (MTU 1500) and its member port bond0 (MTU 9000). None of the other bridges with tagged members of bond0 had any issue. My tests revealed that bond0 and its slaves eno1 and eno2 had an MTU of 9000 before vmbr0 was initialized but an MTU of 1500 after vmbr0 had been initialized (post-up). So you were right that "setting mtu 1500 on vmbr0 may propagate to member interfaces". It even trickles down to the bond slaves of a member port. The solution is a post-up script on the vmbr0 interface, setting a different MTU on bond0 and vmbr0. post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500 That leads to the expected result, so no need to tag the native VLAN. root at pve61:~# ip link show | egrep ': (eno1|eno2|bond0|vmbr0):' 2: eno1: mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000 3: eno2: mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000 6: bond0: mtu 9000 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000 7: vmbr0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 See the final configuration below. The only change is the post-up script on vmbr0. The other vmbr interfaces originally shown were just omitted because none of them had an issue. Thanks for pointing me into the right direction. 
Cheers Stefan iface eno1 inet manual mtu 9000 #Gb1 - Trunk - Jumbo Frames iface eno2 inet manual mtu 9000 #Gb2 - Trunk - Jumbo Frames auto bond0 iface bond0 inet manual slaves eno1 eno2 bond_miimon 100 bond_mode active-backup mtu 9000 #HA Bundle Gb1/Gb2 - Trunk - Jumbo Frames auto vmbr0 iface vmbr0 inet static address 172.21.54.204 netmask 255.255.255.0 gateway 172.21.54.254 bridge_ports bond0 bridge_stp off bridge_fd 0 mtu 1500 post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500 #PRIVATE - VLAN 682 - Native On Jan 22, 2020, at 01:12, Stefan M. Radman > wrote: Hi Ronny Thanks for the input. setting mtu 1500 on vmbr0 may propagate to member interfaces, in more recent kernels. I belive member ports need to have the same mtu as the bridge Hmm. That might be the point with the native bond0 interface. Can you refer me to the place where this is documented or at least discussed? Maybe I can find a configuration item to switch this enforcement off (and configure the MTU manually). It seems that I'll have to do some more testing to find out at which point the MTUs change (or don't). Thanks Stefan On Jan 21, 2020, at 09:17, Ronny Aasen > wrote: I do not know... But i am interested in this case since i have a very similar config on some of my own clusters, and am planning upgrades. Hoppe you can post your solution when you find it. I am guesstimating that setting mtu 1500 on vmbr0 may propagate to member interfaces, in more recent kernels. I belive member ports need to have the same mtu as the bridge, but probably not activly enforced in previous kernels. Personaly i never use native unless it is a phone or accesspoint on the end of a cable. So in my config vmbr0 is a vlan that is vlan-aware without any ip addresses on. with mtu 9000 and bond0 as member. and my vlans are vmbr0.686 instead of bond0.686 only defined for vlans where proxmox need an ip mtu 1500 and 9000 depending on needs. my vm's are attached to vmbr0 with a tag in the config. This way I do not have to edit proxmox config to take a new vlan in use. and all mtu 1500 stancas go on vlan interfaces, and not on bridges, and that probably do not propagate the same way. In your shoes i would try to TAG the native, and move vmbr0 to vmbr682 on bond0.682. This mean you need to change the vm's using vmbr0 tho. So if you have lots of vm's attached to vmbr0 perhaps just TAG and make the member port bond0.682 to avoid changing vm configs. this should make a bond0.682 vlan as the member port and hopefully allow the mtu. disclaimer: just wild guesses, i have not tested this on 6.1. good luck Ronny Aasen CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From smr at kmi.com Sun Feb 2 23:13:03 2020 From: smr at kmi.com (Stefan M. 
Radman) Date: Sun, 2 Feb 2020 22:13:03 +0000 Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: References: <413ca3933a616bd30233b8fefb305154@verdnatura.es> <20200114132801.GC2777@sv.lnf.it> <20200114151223.GE2777@sv.lnf.it> <389b6180fc79c96727ffb9376b1a1555@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> Message-ID: <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> After upgrading a node that uses iSCSI multipath to 6.1, auto activation of LVM volumes fails for me as well. I am seeing exactly the same problem as originally reported by Nada. Apparently, LVM scans the iSCSI adapters before multipath is fully loaded and consequently ignores volumes appearing on duplicate devices. Marco Gaiarin correctly noted this in January. See the error messages in the system log below. Does anyone have a hint, how to delay the lvm-pvscan service until multipath has finished its discovery? Thanks Stefan Feb 02 22:31:25 proxmox kernel: device-mapper: multipath service-time: version 0.3.0 loaded Feb 02 22:31:25 proxmox multipathd[1384]: 36000402002fc66d571e4e4cb00000000: load table [0 9766420480 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:64 1] Feb 02 22:31:25 proxmox multipathd[1384]: sde [8:64]: path added to devmap 36000402002fc66d571e4e4cb00000000 Feb 02 22:31:25 proxmox kernel: sd 6:0:0:1: alua: port group 07 state N non-preferred supports tolusNA Feb 02 22:31:25 proxmox kernel: sd 6:0:0:1: alua: port group 07 state N non-preferred supports tolusNA Feb 02 22:31:25 proxmox systemd[1]: Starting LVM event activation on device 253:6... Feb 02 22:31:26 proxmox iscsid[1946]: Connection1:0 to [target: iqn.1999-02.com.nexsan:p1:satabeast2:02a166d5, portal: 192.168.86.11,3260] through [iface: default] is operational now Feb 02 22:31:26 proxmox iscsid[1946]: Connection2:0 to [target: iqn.1999-02.com.nexsan:p0:satabeast2:02a166d5, portal: 192.168.86.10,3260] through [iface: default] is operational now Feb 02 22:31:26 proxmox iscsid[1946]: Connection3:0 to [target: iqn.1999-02.com.nexsan:p3:satabeast2:02a166d5, portal: 192.168.86.21,3260] through [iface: default] is operational now Feb 02 22:31:26 proxmox iscsid[1946]: Connection4:0 to [target: iqn.1999-02.com.nexsan:p2:satabeast2:02a166d5, portal: 192.168.86.20,3260] through [iface: default] is operational now Feb 02 22:31:26 proxmox lvm[2324]: WARNING: Not using device /dev/sdb for PV EJCaTd-v9EY-vw8B-Bja1-bWdI-22Hf-1RL9cW. Feb 02 22:31:26 proxmox lvm[2324]: WARNING: Not using device /dev/sdc for PV EJCaTd-v9EY-vw8B-Bja1-bWdI-22Hf-1RL9cW. Feb 02 22:31:26 proxmox lvm[2324]: WARNING: Not using device /dev/sdd for PV EJCaTd-v9EY-vw8B-Bja1-bWdI-22Hf-1RL9cW. Feb 02 22:31:26 proxmox lvm[2324]: WARNING: PV EJCaTd-v9EY-vw8B-Bja1-bWdI-22Hf-1RL9cW prefers device /dev/mapper/36000402002fc66d571e4e4cb00000000 because device is in dm subsystem. Feb 02 22:31:26 proxmox lvm[2324]: Cannot activate LVs in VG san-vol1 while PVs appear on duplicate devices. Feb 02 22:31:26 proxmox lvm[2324]: Cannot activate LVs in VG san-vol1 while PVs appear on duplicate devices. Feb 02 22:31:26 proxmox lvm[2324]: Cannot activate LVs in VG san-vol1 while PVs appear on duplicate devices. Feb 02 22:31:26 proxmox lvm[2324]: 0 logical volume(s) in volume group "san-vol1" now active Feb 02 22:31:26 proxmox lvm[2324]: san-vol1: autoactivation failed. 
Feb 02 22:31:26 proxmox systemd[1]: lvm2-pvscan at 253:6.service: Main process exited, code=exited, status=5/NOTINSTALLED Feb 02 22:31:26 proxmox systemd[1]: lvm2-pvscan at 253:6.service: Failed with result 'exit-code'. Feb 02 22:31:26 proxmox systemd[1]: Failed to start LVM event activation on device 253:6. Feb 02 22:31:26 proxmox multipathd[1384]: 36000402002fc66d571e4e4cb00000000: performing delayed actions Feb 02 22:31:26 proxmox multipathd[1384]: 36000402002fc66d571e4e4cb00000000: load table [0 9766420480 multipath 1 queue_if_no_path 1 alua 2 1 service-time 0 2 1 8:32 1 8:16 1 service-time 0 2 1 8:64 1 Feb 02 22:31:26 proxmox kernel: sd 3:0:0:1: alua: port group 04 state A preferred supports tolusNA Feb 02 22:31:26 proxmox kernel: sd 4:0:0:1: alua: port group 03 state A preferred supports tolusNA Feb 02 22:31:26 proxmox kernel: sd 3:0:0:1: alua: port group 04 state A preferred supports tolusNA Feb 02 22:31:26 proxmox kernel: sd 4:0:0:1: alua: port group 03 state A preferred supports tolusNA CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From smr at kmi.com Mon Feb 3 10:53:36 2020 From: smr at kmi.com (Stefan M. Radman) Date: Mon, 3 Feb 2020 09:53:36 +0000 Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> References: <413ca3933a616bd30233b8fefb305154@verdnatura.es> <20200114132801.GC2777@sv.lnf.it> <20200114151223.GE2777@sv.lnf.it> <389b6180fc79c96727ffb9376b1a1555@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> Message-ID: <344C7FA0-DF59-4A5D-800C-B29C27AEBFB2@kmi.com> Hi Nada SUSE document #7023336 seems to describe the race condition we are seeing but I'm not sure the solution they are offering will help in my case. The boot parameter rd.lvm.conf=0 just removes /etc/lvm.conf from the initramfs. My lvm.conf is unchanged (system default) so removing it from initramfs probably won't change anything I believe. Nevertheless I'm curious about the result of your test. My own workaround for the issue at hand is to delay the start of lvm2-pvscan by a few seconds (see below). That gives multipathd more than enough time to complete its job. It solves the problem for me (at least until the next upgrade) but there might be a smarter solution. 
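The sed edit shown below patches the shipped unit file, so the same 5 second delay could also be kept in a systemd drop-in, which survives package upgrades. A minimal, untested sketch of that variant, assuming the template unit lvm2-pvscan@.service seen in the log above and that 5 seconds is enough for multipathd:

mkdir -p /etc/systemd/system/lvm2-pvscan@.service.d
cat > /etc/systemd/system/lvm2-pvscan@.service.d/delay.conf <<'EOF'
[Service]
# give multipathd time to assemble its maps before the PV scan runs
ExecStartPre=/bin/sleep 5
EOF
systemctl daemon-reload

ExecStartPre= entries are additive, so the drop-in only puts the sleep in front of the pvscan command the unit already runs.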
Stefan root at proxmox:~# sed -i.orig '/^ExecStart=/i ExecStartPre=/bin/sleep 5' /lib/systemd/system/lvm2-pvscan at .service root at proxmox:~# systemctl daemon-reload root at proxmox:~# diff -u /lib/systemd/system/lvm2-pvscan at .service{.orig,} --- /lib/systemd/system/lvm2-pvscan at .service.orig 2019-07-15 15:11:18.000000000 +0200 +++ /lib/systemd/system/lvm2-pvscan at .service 2020-02-03 01:19:38.440840599 +0100 @@ -10,5 +10,6 @@ [Service] Type=oneshot RemainAfterExit=yes +ExecStartPre=/bin/sleep 5 ExecStart=/sbin/lvm pvscan --cache --activate ay %i ExecStop=/sbin/lvm pvscan --cache %i On Feb 3, 2020, at 09:42, nada > wrote: hi Stefan some 'racing' problem is described and solved here https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.suse.com%2Fsupport%2Fkb%2Fdoc%2F%3Fid%3D7023336&data=02%7C01%7Csmr%40kmi.com%7Cfb29a75f1c3e4ff35a2b08d7a885142c%7Cc2283768b8d34e008f3d85b1b4f03b33%7C0%7C0%7C637163161847328253&sdata=xFGQYGGweOixVqVLw5DwQ81n3lgNWYzXjJATkVrfRZQ%3D&reserved=0 PLS read it maybe i have time to test it during next weekend Nada On 2020-02-02 23:13, Stefan M. Radman wrote: After upgrading a node that uses iSCSI multipath to 6.1, auto activation of LVM volumes fails for me as well. I am seeing exactly the same problem as originally reported by Nada. Apparently, LVM scans the iSCSI adapters before multipath is fully loaded and consequently ignores volumes appearing on duplicate devices. Marco Gaiarin correctly noted this in January. See the error messages in the system log below. Does anyone have a hint, how to delay the lvm-pvscan service until multipath has finished its discovery? Thanks Stefan CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From gaio at sv.lnf.it Mon Feb 3 11:20:41 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 3 Feb 2020 11:20:41 +0100 Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: References: <20200114151223.GE2777@sv.lnf.it> <389b6180fc79c96727ffb9376b1a1555@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> Message-ID: <20200203102041.GC4064@sv.lnf.it> Mandi! Stefan M. Radman via pve-user In chel di` si favelave... > My own workaround for the issue at hand is to delay the start of lvm2-pvscan by a few seconds (see below). > That gives multipathd more than enough time to complete its job. > It solves the problem for me (at least until the next upgrade) but there might be a smarter solution. Would be better to make 'lvm2-pvscan' service depends on 'multipathd'? Something like (NOT TESTED!) adding in: /etc/systemd/system/lvm2-pvscan.service.d/wait-multipath.conf the rows: [Unit] After=multipathd.service Wants=multipathd.service -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From smr at kmi.com Mon Feb 3 11:46:45 2020 From: smr at kmi.com (Stefan M. Radman) Date: Mon, 3 Feb 2020 10:46:45 +0000 Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: <20200203102041.GC4064@sv.lnf.it> References: <20200114151223.GE2777@sv.lnf.it> <389b6180fc79c96727ffb9376b1a1555@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> <20200203102041.GC4064@sv.lnf.it> Message-ID: Hi Marco Thanks for the suggestion. I tried putting After=multipathd.service into lvm2-pvescan.service but that did not help. It just started the multipathd.service before lvm2-pvescan.service but did not wait for discovery to finish. That lvm2-pvscan.service Wants=multipathd.service is probably not really true because lvm2-pvscan.service will happily run without multipathd.service. Creating such a dependency is usually not a good idea and it is not needed because the multipathd.service does start. In such a case After= should be enough but isn't in this case (see above). It seems to be a timing problem after all. Stefan > On Feb 3, 2020, at 11:20, Marco Gaiarin wrote: > > Mandi! Stefan M. Radman via pve-user > In chel di` si favelave... > >> My own workaround for the issue at hand is to delay the start of lvm2-pvscan by a few seconds (see below). >> That gives multipathd more than enough time to complete its job. >> It solves the problem for me (at least until the next upgrade) but there might be a smarter solution. > > Would be better to make 'lvm2-pvscan' service depends on 'multipathd'? > Something like (NOT TESTED!) adding in: > > /etc/systemd/system/lvm2-pvscan.service.d/wait-multipath.conf > > the rows: > [Unit] > After=multipathd.service > Wants=multipathd.service > > -- > dott. Marco Gaiarin GNUPG Key ID: 240A3D66 > Associazione ``La Nostra Famiglia'' https://nam04.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.lanostrafamiglia.it%2F&data=02%7C01%7Csmr%40kmi.com%7C514880708d424324a25608d7a892cfad%7Cc2283768b8d34e008f3d85b1b4f03b33%7C0%7C0%7C637163220838442358&sdata=F9DvF%2B1O5WHXn89rACg7BmlSqaVR3ysx9aVLSZPoOX4%3D&reserved=0 > Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) > marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 > > Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
> https://nam04.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.lanostrafamiglia.it%2Findex.php%2Fit%2Fsostienici%2F5x1000&data=02%7C01%7Csmr%40kmi.com%7C514880708d424324a25608d7a892cfad%7Cc2283768b8d34e008f3d85b1b4f03b33%7C0%7C0%7C637163220838442358&sdata=wBdJSGLQTXWbjunsWHatsVqaAtEj80Zik5dOWvUelhg%3D&reserved=0 > (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpve.proxmox.com%2Fcgi-bin%2Fmailman%2Flistinfo%2Fpve-user&data=02%7C01%7Csmr%40kmi.com%7C514880708d424324a25608d7a892cfad%7Cc2283768b8d34e008f3d85b1b4f03b33%7C0%7C0%7C637163220838442358&sdata=DhEWzJ4BAWQkYsB0VctfAYGf6P6XX%2B508GLSSQCcWW8%3D&reserved=0 CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From gaio at sv.lnf.it Mon Feb 3 12:42:33 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 3 Feb 2020 12:42:33 +0100 Subject: [PVE-User] PVE6 and PCI(e) Passthrough... Message-ID: <20200203114233.GI4064@sv.lnf.it> I've done a fresh installation of PVE6 (6.1) on an old HP ProLiant ML110 G6 (CPU Intel(R) Xeon(R) CPU X3430 at 2.40GHz), and following: https://pve.proxmox.com/wiki/PCI(e)_Passthrough I've tried to enable PCI passthrough, that seems work: root at ino:~# grep -i iommu /var/log/kern.log Feb 2 14:27:38 ino kernel: [ 0.000000] Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.3.13-2-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on Feb 2 14:27:38 ino kernel: [ 0.075004] Kernel command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.3.13-2-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on Feb 2 14:27:38 ino kernel: [ 0.075119] DMAR: IOMMU enabled but still PVE web interface refuse me to put in passthrough a device (say: iommu not enabled) and the box reboot spontaneously under load (i'm copying files...) every half an hour, roughly. I'm missing something? Or simply my hardware is not supported? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From leesteken at pm.me Mon Feb 3 14:28:01 2020 From: leesteken at pm.me (leesteken at pm.me) Date: Mon, 03 Feb 2020 13:28:01 +0000 Subject: [PVE-User] PVE6 and PCI(e) Passthrough... 
In-Reply-To: <20200203114233.GI4064@sv.lnf.it> References: <20200203114233.GI4064@sv.lnf.it> Message-ID: On Monday, February 3, 2020 12:42 PM, Marco Gaiarin wrote: > > I've done a fresh installation of PVE6 (6.1) on an old HP ProLiant ML110 > G6 (CPU Intel(R) Xeon(R) CPU X3430 at 2.40GHz), and following: > > https://pve.proxmox.com/wiki/PCI(e)_Passthrough > > I've tried to enable PCI passthrough, that seems work: > > root at ino:~# grep -i iommu /var/log/kern.log > Feb 2 14:27:38 ino kernel: [ 0.000000] Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.3.13-2-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on > Feb 2 14:27:38 ino kernel: [ 0.075004] Kernel command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.3.13-2-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on > Feb 2 14:27:38 ino kernel: [ 0.075119] DMAR: IOMMU enabled > > but still PVE web interface refuse me to put in passthrough a device > (say: iommu not enabled) and the box reboot spontaneously under load > (i'm copying files...) every half an hour, roughly. > > I'm missing something? Or simply my hardware is not supported? Can you post the output from the following command? find /sys/kernel/iommu_groups/ -type l kind regards, Arjen From gaio at sv.lnf.it Mon Feb 3 16:11:10 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 3 Feb 2020 16:11:10 +0100 Subject: [PVE-User] PVE6 and PCI(e) Passthrough... In-Reply-To: References: <20200203114233.GI4064@sv.lnf.it> Message-ID: <20200203151110.GN4064@sv.lnf.it> Mandi! leesteken--- via pve-user In chel di` si favelave... > > I'm missing something? Or simply my hardware is not supported? > Can you post the output from the following command? > find /sys/kernel/iommu_groups/ -type l root at ino:~# find /sys/kernel/iommu_groups/ -type l root at ino:~# none. So seems it is not supported... right? -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From mityapetuhov at gmail.com Mon Feb 3 17:00:10 2020 From: mityapetuhov at gmail.com (Dmitry Petuhov) Date: Mon, 3 Feb 2020 19:00:10 +0300 Subject: [PVE-User] PVE6 and PCI(e) Passthrough... In-Reply-To: <20200203151110.GN4064@sv.lnf.it> References: <20200203114233.GI4064@sv.lnf.it> <20200203151110.GN4064@sv.lnf.it> Message-ID: 03.02.2020 18:11, Marco Gaiarin wrote: >> Can you post the output from the following command? >> find /sys/kernel/iommu_groups/ -type l > root at ino:~# find /sys/kernel/iommu_groups/ -type l > root at ino:~# > none. So seems it is not supported... right? It may be disabled in BIOS/UEFI. Or not supported by it (try to update, if there's no IOMMU/VT-d in BIOS/UEFI Setup). From gilberto.nunes32 at gmail.com Mon Feb 3 18:11:14 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Mon, 3 Feb 2020 14:11:14 -0300 Subject: [PVE-User] CPU Pinning... Message-ID: Hi there Is there any way to do cpu pinning in Proxmox 6??? 
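Proxmox offers no GUI option for this; a reply from Josh Knight further down in this digest describes pinning the QEMU vCPU threads by hand with taskset after the VM has started. Purely as an illustration of that idea, a rough and untested sketch; the PID file path /var/run/qemu-server/<vmid>.pid and the "CPU <n>/KVM" thread names are assumptions about qemu-server/QEMU, not taken from this thread:

#!/bin/bash
# sketch: pin each vCPU thread of a running VM to one host core
VMID=100            # hypothetical example VM ID
CORES=(4 5 6 7)     # host cores, indexed by vCPU number (one entry per vCPU)
PID=$(cat /var/run/qemu-server/${VMID}.pid)
for task in /proc/${PID}/task/*; do
    name=$(cat ${task}/comm)      # vCPU threads are assumed to be named "CPU <n>/KVM"
    case "$name" in
    CPU*KVM)
        idx=${name#CPU }; idx=${idx%%/*}
        taskset -pc "${CORES[$idx]}" "$(basename ${task})"
        ;;
    esac
done

The pinning is lost when the VM is stopped, so something like this would have to be re-run after every VM start.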
Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From humbertos at ifsc.edu.br Mon Feb 3 18:35:16 2020 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Mon, 3 Feb 2020 14:35:16 -0300 (BRT) Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: References: <413ca3933a616bd30233b8fefb305154@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> Message-ID: <337680261.683808.1580751316031.JavaMail.zimbra@ifsc.edu.br> Hi everyone. I'm using FC with multipath and have the same trouble. I am running Proxmox 5 and the problem happen only with the pve-kernel-4.15.18-24-pve. With the olders one the system start correctily. I did the workaround suggest by Stefan and the system start, but two of four path don't worked. I will use pve-kernel-4.15.18-23-pve and await a update. Humberto De: "Stefan M. Radman via pve-user" Para: "nada" , "PVE User List" Cc: "Stefan M. Radman" Enviadas: Segunda-feira, 3 de fevereiro de 2020 6:53:36 Assunto: Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From smr at kmi.com Mon Feb 3 20:21:22 2020 From: smr at kmi.com (Stefan M. Radman) Date: Mon, 3 Feb 2020 19:21:22 +0000 Subject: [PVE-User] LVM autoactivation failed with multipath over iSCSI In-Reply-To: <337680261.683808.1580751316031.JavaMail.zimbra@ifsc.edu.br> References: <413ca3933a616bd30233b8fefb305154@verdnatura.es> <5b545e139718261531562fec0a49eede@verdnatura.es> <56518B49-6608-4161-8424-5088F92853BE@kmi.com> <4C4C3D36-0A76-43CB-B802-1FFCFEECE548@kmi.com> <202541d69d0b3a71b6ee87d103c7a84c@verdnatura.es> <337680261.683808.1580751316031.JavaMail.zimbra@ifsc.edu.br> Message-ID: Hi Humberto I just finished the update to 6.1 of two FC multipath attached nodes and had to use the same workaround as the one I used for iSCSI multipath With "ExecStartPre=/bin/sleep 5" it worked but my situation is different than yours because I have only two FC paths per node. Maybe I'll try boot parameter rd.lvm.conf=0 later today as suggested by Nada. It might work after all. Stefan On Feb 3, 2020, at 18:35, Humberto Jose De Sousa > wrote: Hi everyone. I'm using FC with multipath and have the same trouble. I am running Proxmox 5 and the problem happen only with the pve-kernel-4.15.18-24-pve. With the olders one the system start correctily. I did the workaround suggest by Stefan and the system start, but two of four path don't worked. I will use pve-kernel-4.15.18-23-pve and await a update. Humberto ________________________________ De: "Stefan M. Radman via pve-user" > Para: "nada" >, "PVE User List" > Cc: "Stefan M. Radman" > Enviadas: Segunda-feira, 3 de fevereiro de 2020 6:53:36 Assunto: Re: [PVE-User] LVM autoactivation failed with multipath over iSCSI _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). 
If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From jm at ginernet.com Tue Feb 4 07:54:37 2020 From: jm at ginernet.com (=?UTF-8?Q?Jos=c3=a9_Manuel_Giner?=) Date: Tue, 4 Feb 2020 07:54:37 +0100 Subject: [PVE-User] pve.proxmox.com has no SPF Message-ID: Hello, Hello, many ML emails received from *@pve.proxmox.com we get in the spambox, because the domain: pve.proxmox.com has not SPF record. Can you fix please? Thanks! -- Jos? Manuel Giner https://ginernet.com From gaio at sv.lnf.it Tue Feb 4 11:34:21 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Tue, 4 Feb 2020 11:34:21 +0100 Subject: [PVE-User] PVE6 and PCI(e) Passthrough... In-Reply-To: References: <20200203114233.GI4064@sv.lnf.it> <20200203151110.GN4064@sv.lnf.it> Message-ID: <20200204103421.GA3660@sv.lnf.it> Mandi! Dmitry Petuhov In chel di` si favelave... > It may be disabled in BIOS/UEFI. Bingo! It was disabled in BIOS! root at ino:~# find /sys/kernel/iommu_groups/ -type l /sys/kernel/iommu_groups/7/devices/0000:00:1c.2 /sys/kernel/iommu_groups/15/devices/0000:1e:00.0 /sys/kernel/iommu_groups/5/devices/0000:00:1c.0 /sys/kernel/iommu_groups/13/devices/0000:11:08.0 /sys/kernel/iommu_groups/13/devices/0000:10:00.0 /sys/kernel/iommu_groups/3/devices/0000:00:10.0 /sys/kernel/iommu_groups/3/devices/0000:00:10.1 /sys/kernel/iommu_groups/11/devices/0000:00:1e.0 /sys/kernel/iommu_groups/1/devices/0000:00:03.0 /sys/kernel/iommu_groups/8/devices/0000:00:1c.3 /sys/kernel/iommu_groups/6/devices/0000:00:1c.1 /sys/kernel/iommu_groups/14/devices/0000:1c:00.0 /sys/kernel/iommu_groups/4/devices/0000:00:1a.0 /sys/kernel/iommu_groups/12/devices/0000:00:1f.2 /sys/kernel/iommu_groups/12/devices/0000:00:1f.0 /sys/kernel/iommu_groups/12/devices/0000:00:1f.3 /sys/kernel/iommu_groups/2/devices/0000:00:08.0 /sys/kernel/iommu_groups/2/devices/0000:00:08.3 /sys/kernel/iommu_groups/2/devices/0000:00:08.1 /sys/kernel/iommu_groups/2/devices/0000:00:08.2 /sys/kernel/iommu_groups/10/devices/0000:00:1d.0 /sys/kernel/iommu_groups/0/devices/0000:00:00.0 /sys/kernel/iommu_groups/9/devices/0000:00:1c.4 Thanks! -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From lists at merit.unu.edu Tue Feb 4 12:53:41 2020 From: lists at merit.unu.edu (lists) Date: Tue, 4 Feb 2020 12:53:41 +0100 Subject: [PVE-User] bmc-watchdog curiosity Message-ID: Hi, We have recently enabled the IPMI bmc-watchdog on the hosts of our pve cluster with. Very simple, and we have seen it in action, so it works nicely. :-) We left /etc/default/pve-ha-manager at the default, meaning softdog. Here: https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x we can read about IPMI watchdog, and to configure it like this: options ipmi_watchdog action=power_cycle panic_wdt_timeout=10 The question is: would it give us anything, if we also configured that? As we have seen the bmc-watchdog in action, we know that it works and does the job. What added value would module "ipmi_watchdog" bring? 
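For reference only, a sketch of what actually switching to the IPMI watchdog would look like, assuming the WATCHDOG_MODULE setting in /etc/default/pve-ha-manager (commented out by default) is the knob meant above, and reusing the modprobe options line quoted from the wiki; whether it adds anything is exactly the question, see Dietmar's reply below:

echo 'WATCHDOG_MODULE=ipmi_watchdog' >> /etc/default/pve-ha-manager
echo 'options ipmi_watchdog action=power_cycle panic_wdt_timeout=10' > /etc/modprobe.d/ipmi_watchdog.conf
# the change only takes effect after a reboot of the node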
(extra info: three-node hyper-converged pve cluster, still at 5.3-8) Thanks, MJ From smr at kmi.com Tue Feb 4 16:26:09 2020 From: smr at kmi.com (Stefan M. Radman) Date: Tue, 4 Feb 2020 15:26:09 +0000 Subject: PVE 6.1 upgrade and iSCSI Message-ID: Just wanted to share an experience made during a recent upgrade to 6.1 of a node that uses iSCSI storage. The upgrade ultimately failed because dpkg refused to upgrade the open-iscsi package due to a failure in the postinst script of the open-iscsi package. See the installation log below for details. The problem was a failed iSCSI session (1 out of 4). Only after fixing the source of the issue (a duplicate IP address on the storage VLAN!) was I able to restart and complete the upgrade. Lesson learned: better have your logs clean before the upgrade to 6.x or your problems won't get less. Stefan PS: and run pve5to6 until no failures and no warnings.. root at proxmox:~# apt dist-upgrade . . Setting up open-iscsi (2.0.874-7.1) ... open-iscsi postinst: since the check in preinst some iSCSI sessions have failed. -> will wait 30s for automatic recovery open-iscsi postinst: some sessions are still in failed state -> iscsid will be restarted regardless, since that may actually help with the session recovery. dpkg: error processing package open-iscsi (--configure): installed open-iscsi package post-installation script subprocess returned error exit status 1 . . Errors were encountered while processing: open-iscsi E: Sub-process /usr/bin/dpkg returned an error code (1) root at proxmox:~# CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From dietmar at proxmox.com Tue Feb 4 18:40:09 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Tue, 4 Feb 2020 18:40:09 +0100 (CET) Subject: [PVE-User] bmc-watchdog curiosity In-Reply-To: References: Message-ID: <1974494460.129.1580838009040@webmail.proxmox.com> > Here: https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x we can > read about IPMI watchdog, and to configure it like this: > options ipmi_watchdog action=power_cycle panic_wdt_timeout=10 > > The question is: would it give us anything, if we also configured that? > As we have seen the bmc-watchdog in action, we know that it works and > does the job. > > What added value would module "ipmi_watchdog" bring? In theory, a HW watchdog is considered more reliable than softdog. In practice, softdog works extremely well, and HW watchdogs fail because of miss-configuration or IPMI bios bugs... So IMHO you do not get much added value using ipmi_watchdog. But this is more a feeling - I do not have numbers. From lists at merit.unu.edu Tue Feb 4 21:09:35 2020 From: lists at merit.unu.edu (mj) Date: Tue, 4 Feb 2020 21:09:35 +0100 Subject: [PVE-User] bmc-watchdog curiosity In-Reply-To: <1974494460.129.1580838009040@webmail.proxmox.com> References: <1974494460.129.1580838009040@webmail.proxmox.com> Message-ID: <2bbb45b5-c52f-4348-b9a8-2ad1692dc00a@merit.unu.edu> Hi Dietmar, On 2/4/20 6:40 PM, Dietmar Maurer wrote: > In theory, a HW watchdog is considered more reliable than softdog. > > In practice, softdog works extremely well, and HW watchdogs fail because > of miss-configuration or IPMI bios bugs... 
> > So IMHO you do not get much added value using ipmi_watchdog. > > But this is more a feeling - I do not have numbers. That is exactly the kind of feedback we were after. We'll keep it the way it is now, with softdog and bmc-watchdog for resetting a crashed machine. Thanks! MJ From josh at noobbox.com Tue Feb 4 22:36:18 2020 From: josh at noobbox.com (Josh Knight) Date: Tue, 4 Feb 2020 16:36:18 -0500 Subject: [PVE-User] CPU Pinning... In-Reply-To: References: Message-ID: One method I came across was from this random github script. I haven't actually tried it, so the script itself may not work on Proxmox 6. https://gist.github.com/ayufan/37be5c0b8fd26113a8be Essentially it's using `qm monitor ` to run the `info cpus` command to obtain the PIDs of the vCPU for the VM. Then using linux `taskset` to pin those to specific CPUs. This should work for manually setting up a VM each time it boots. I wish that proxmox would add this to its UI, this was actually one of the factors in leaving some of our hosts running Ubuntu. It's pretty easy to do vCPU pinning with libvirt. Josh On Mon, Feb 3, 2020 at 12:12 PM Gilberto Nunes wrote: > Hi there > > Is there any way to do cpu pinning in Proxmox 6??? > > Thanks > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From Bryan at bryanfields.net Wed Feb 5 02:35:45 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 4 Feb 2020 20:35:45 -0500 Subject: [PVE-User] Solved: Firewall Filter ICMP types and connection limiting In-Reply-To: <4aaf9c61-9d01-ad9d-0544-d8ec1f3a1580@bryanfields.net> References: <4aaf9c61-9d01-ad9d-0544-d8ec1f3a1580@bryanfields.net> Message-ID: <62b6f85c-6900-c6ad-c5be-00f18f3e12ee@bryanfields.net> On 2/1/20 11:47 PM, Bryan Fields wrote: > greetings, > I have a policy in iptables for forwared traffic below : > > iptables -t filter -A INPUT -j ACCEPT --in-interface $INET_IF --protocol \ > icmp --icmp-type echo-request --match limit --limit 4/s --limit-burst 3 > > iptables -t filter -A INPUT -j log-and-drop --in-interface $INET_IF \ > --protocol icmp --icmp-type echo-request > > I've attempted to set this up in the gui, but there's no option to add the > ICMP type, only IP type, and nothing for the match option. If I add this in > the config file, it's deleted upon the next time I look at it. I've found the following to be true with Proxmox: 1. The ICMP type can be put as text or numeric in the port field. this is undocumented, but it is in the code at: /usr/share/perl5/PVE/Firewall.pm 2. ProxMox will respect any filters already loaded in ip/ip6tables. This is really nice and props to the guys that coded this. As an example: Chain INPUT (policy ACCEPT) target prot opt source destination PVEFW-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT) target prot opt source destination PVEFW-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 By default Proxmox will jump all traffic input into PVEFW-INPUT, and then chain it's stuff off there. When installing/reseting/deleting/etc. Proxmox managed entries it does it all in it's own chain. This means we can hook into it by making our own chain and installing it before it. As Proxmox will not mess with this non-managed chain we can do anything we want in it, and so long as we don't do a drop all, traffic will flow into the Proxmox chains. 
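As a minimal sketch of that approach, the rate limit from the first post could live in such a non-managed chain as shown here (the interface name is a hypothetical example and the original log-and-drop target is replaced with a plain DROP):

INET_IF=eth0    # example uplink interface
iptables -N LOCAL-INPUT 2>/dev/null || iptables -F LOCAL-INPUT
iptables -A LOCAL-INPUT -i "$INET_IF" -p icmp --icmp-type echo-request \
    -m limit --limit 4/s --limit-burst 3 -j ACCEPT
iptables -A LOCAL-INPUT -i "$INET_IF" -p icmp --icmp-type echo-request -j DROP
# hook the custom chain in ahead of the Proxmox-managed PVEFW-INPUT jump
iptables -I INPUT 1 -j LOCAL-INPUT

Anything not matched in LOCAL-INPUT returns to INPUT and still reaches PVEFW-INPUT, so the Proxmox-managed rules continue to apply.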
What I did was to create a /etc/pve/localfirewall.sh script (is there a better place to put this?) and call it upon boot from /etc/network/interfaces: auto vmbr44 iface vmbr44 inet manual bridge_ports eth0.41 bridge_stp off bridge_fd 0 up bash /etc/pve/localfirewall.sh I've attached my script for reference. Is there anything I'm missing here about this being a non-good solution? If not, I'd like to add this on the wiki, how does one go about getting an account? Thanks, -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From Bryan at bryanfields.net Wed Feb 5 03:10:41 2020 From: Bryan at bryanfields.net (Bryan Fields) Date: Tue, 4 Feb 2020 21:10:41 -0500 Subject: [PVE-User] Solved: Firewall Filter ICMP types and connection limiting In-Reply-To: <62b6f85c-6900-c6ad-c5be-00f18f3e12ee@bryanfields.net> References: <4aaf9c61-9d01-ad9d-0544-d8ec1f3a1580@bryanfields.net> <62b6f85c-6900-c6ad-c5be-00f18f3e12ee@bryanfields.net> Message-ID: <21ac1a3c-88f1-fb7d-5c01-95aae7dfa80f@bryanfields.net> On 2/4/20 8:35 PM, Bryan Fields wrote: > I've attached my script for reference. Looks like it was blocked. It's on the web server below: http://keekles.org/~bryan/localfirewall.sh.txt -- Bryan Fields 727-409-1194 - Voice http://bryanfields.net From f.gruenbichler at proxmox.com Wed Feb 5 08:19:52 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Wed, 05 Feb 2020 08:19:52 +0100 Subject: [PVE-User] Solved: Firewall Filter ICMP types and connection limiting In-Reply-To: <62b6f85c-6900-c6ad-c5be-00f18f3e12ee@bryanfields.net> References: <4aaf9c61-9d01-ad9d-0544-d8ec1f3a1580@bryanfields.net> <62b6f85c-6900-c6ad-c5be-00f18f3e12ee@bryanfields.net> Message-ID: <1580886418.u3zralffmv.astroid@nora.none> On February 5, 2020 2:35 am, Bryan Fields wrote: > On 2/1/20 11:47 PM, Bryan Fields wrote: >> greetings, >> I have a policy in iptables for forwared traffic below : >> >> iptables -t filter -A INPUT -j ACCEPT --in-interface $INET_IF --protocol \ >> icmp --icmp-type echo-request --match limit --limit 4/s --limit-burst 3 >> >> iptables -t filter -A INPUT -j log-and-drop --in-interface $INET_IF \ >> --protocol icmp --icmp-type echo-request >> >> I've attempted to set this up in the gui, but there's no option to add the >> ICMP type, only IP type, and nothing for the match option. If I add this in >> the config file, it's deleted upon the next time I look at it. > > I've found the following to be true with Proxmox: > > 1. The ICMP type can be put as text or numeric in the port field. > this is undocumented, but it is in the code at: > /usr/share/perl5/PVE/Firewall.pm yes, that should probably be handled with better regards to usability ;) > > 2. ProxMox will respect any filters already loaded in ip/ip6tables. > This is really nice and props to the guys that coded this. > > As an example: > > Chain INPUT (policy ACCEPT) > target prot opt source destination > PVEFW-INPUT all -- 0.0.0.0/0 0.0.0.0/0 > > Chain FORWARD (policy ACCEPT) > target prot opt source destination > PVEFW-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 > > By default Proxmox will jump all traffic input into PVEFW-INPUT, and then > chain it's stuff off there. When installing/reseting/deleting/etc. Proxmox > managed entries it does it all in it's own chain. This means we can hook into > it by making our own chain and installing it before it. As Proxmox will not > mess with this non-managed chain we can do anything we want in it, and so long > as we don't do a drop all, traffic will flow into the Proxmox chains. 
> > What I did was to create a /etc/pve/localfirewall.sh script (is there a better > place to put this?) and call it upon boot from /etc/network/interfaces: if you have a cluster and want to sync it across the whole cluster, then you can put it into /etc/pve. that file system does have file size limits though and only comes up after networking, so if you don't need it, you might be better off putting it somewhere local (e.g., somewhere else in /etc). > > auto vmbr44 > iface vmbr44 inet manual > bridge_ports eth0.41 > bridge_stp off > bridge_fd 0 > up bash /etc/pve/localfirewall.sh > > > I've attached my script for reference. > > Is there anything I'm missing here about this being a non-good solution? > If not, I'd like to add this on the wiki, how does one go about getting an > account? if you need a wiki account, you can contact office at proxmox.com (we closed public registration because of spam). most of the documentation now lives in the admin guide[1] though, so it might be more worthwhile to generalize it and send a patch for inclusion to pve-devel[2]. the reference docs/admin guide is shipped with every installation, and heavily linked to from the web interface so lots more people read it :) 1: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pve_firewall 2: https://pve.proxmox.com/wiki/Developer_Documentation From gaio at sv.lnf.it Wed Feb 5 09:43:10 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 5 Feb 2020 09:43:10 +0100 Subject: [PVE-User] bmc-watchdog curiosity In-Reply-To: <1974494460.129.1580838009040@webmail.proxmox.com> References: <1974494460.129.1580838009040@webmail.proxmox.com> Message-ID: <20200205084310.GA2741@sv.lnf.it> Mandi! Dietmar Maurer In chel di` si favelave... > In theory, a HW watchdog is considered more reliable than softdog. Just we are here... 'pve-ha-manager' is an alternative to 'watchdog', right? Looking at debian package seems so, to me: root at thor:~# apt-cache show pve-ha-manager Package: pve-ha-manager Architecture: amd64 Version: 2.0-9 Priority: optional Section: perl Maintainer: Proxmox Support Team Installed-Size: 227 Depends: libjson-perl, libpve-common-perl, pve-cluster (>= 3.0-17), systemd, init-system-helpers (>= 1.18~), perl, libc6 (>= 2.7) Conflicts: watchdog [...] but i've not seens this reported on documentation... wiki or manuals. Also, 'watchdog' deaemon do other things, like reboot if load go over a theresold and so on, all things that probably are BAD in a virtualized environment. But probably sysadmin are used to configure it, so... i think it worth a note. Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From dietmar at proxmox.com Wed Feb 5 11:19:50 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Wed, 5 Feb 2020 11:19:50 +0100 (CET) Subject: [PVE-User] bmc-watchdog curiosity In-Reply-To: <20200205084310.GA2741@sv.lnf.it> References: <1974494460.129.1580838009040@webmail.proxmox.com> <20200205084310.GA2741@sv.lnf.it> Message-ID: <36141208.100.1580897990922@webmail.proxmox.com> > Just we are here... 'pve-ha-manager' is an alternative to 'watchdog', > right? You cannot use the debian watchdog package with proxmox. 
> Also, 'watchdog' deaemon do other things, like reboot if load go over a > theresold and so on, all things that probably are BAD in a virtualized > environment. > But probably sysadmin are used to configure it, so... i think it worth a > note. Packages already conflicts, so that prevents accidental installation. From piccardi at truelite.it Wed Feb 5 11:20:56 2020 From: piccardi at truelite.it (Simone Piccardi) Date: Wed, 5 Feb 2020 11:20:56 +0100 Subject: Misleading documentation for qm importdisk Message-ID: Hi, I was importing some VirtualBox VM disk (.vdi) in to a KVM VM in proxmox (6.1). Reading in the man page: qm importdisk [OPTIONS] Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1). I just tried someting like: qm importdisk 100 diskimage.vdi local-lvm but this just copied the diskimage.vdi content inside the disk image, without conversion. So to make it works I had to convert the VDI image to RAW (using qemu-img) and then use qm importdisk with the raw file. So it seems that VDI format is supported by qemu-img, but is not usable with qm importdisk. It will be better to clearly states in the qm man page which are the image formats supported by importdisk, as they seems to be a subset of the ones supported by qemu-img. Regards Simone From a.antreich at proxmox.com Wed Feb 5 12:17:35 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 5 Feb 2020 12:17:35 +0100 Subject: [PVE-User] Misleading documentation for qm importdisk In-Reply-To: References: Message-ID: <20200205111735.GA3741436@dona.proxmox.com> Hello Simone, On Wed, Feb 05, 2020 at 11:20:56AM +0100, Simone Piccardi via pve-user wrote: > Date: Wed, 5 Feb 2020 11:20:56 +0100 > From: Simone Piccardi > To: PVE User List > Subject: Misleading documentation for qm importdisk > User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 > Thunderbird/68.4.1 > > Hi, > > I was importing some VirtualBox VM disk (.vdi) in to a KVM VM in proxmox > (6.1). Reading in the man page: > > qm importdisk [OPTIONS] > > Import an external disk image as an unused disk in a VM. The image > format has to be supported by qemu-img(1). > > I just tried someting like: > > qm importdisk 100 diskimage.vdi local-lvm This should work. On which pveversion -v are you? -- Cheers, Alwin From gbr at majentis.com Mon Feb 10 16:49:28 2020 From: gbr at majentis.com (Gerald Brandt) Date: Mon, 10 Feb 2020 09:49:28 -0600 Subject: [PVE-User] Can't start VM In-Reply-To: References: Message-ID: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> Hi, I have a small cluster of 4 servers. One server in the cluster seems to be having an issue. For the last two weeks, one or two of the VMs seem to psuedo lock up over the weekend. I can still VNC in and type in a login, but after typing the password, the system never responds. Also, all services on that VM (web, version control) are non-responsive. A reset from VM console doesn't work, I need to do a stop and start. However, on start, I get this from the Proxmox server, and the VM never starts: Feb 10 09:29:32 proxmox-2 pvedaemon[1142]: start VM 107: UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: Feb 10 09:29:32 proxmox-2 pvedaemon[16365]: starting task UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: Feb 10 09:29:34 proxmox-2 systemd[1]: Started Session 266 of user root. 
Feb 10 09:29:35 proxmox-2 qm[1236]: VM 107 qmp command failed - VM 107 not running Feb 10 09:29:37 proxmox-2 pvedaemon[1142]: timeout waiting on systemd Feb 10 09:29:37 proxmox-2 pvedaemon[16365]: end task UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: timeout waiting on system I have to migrate all the VMs off the server and reboot the server. Any ideas? Gerald # pveversion --verbose proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve) pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) pve-kernel-4.15: 5.4-12 pve-kernel-4.15.18-24-pve: 4.15.18-52 pve-kernel-4.15.18-21-pve: 4.15.18-48 pve-kernel-4.15.18-18-pve: 4.15.18-44 pve-kernel-4.15.18-13-pve: 4.15.18-37 pve-kernel-4.15.18-12-pve: 4.15.18-36 pve-kernel-4.15.18-11-pve: 4.15.18-34 pve-kernel-4.15.18-10-pve: 4.15.18-32 pve-kernel-4.15.18-7-pve: 4.15.18-27 pve-kernel-4.15.18-5-pve: 4.15.18-24 pve-kernel-4.15.18-4-pve: 4.15.18-23 pve-kernel-4.15.17-3-pve: 4.15.17-14 pve-kernel-4.13.13-5-pve: 4.13.13-38 pve-kernel-4.13.13-2-pve: 4.13.13-33 pve-kernel-4.13.4-1-pve: 4.13.4-26 corosync: 2.4.4-pve1 criu: 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2 libjs-extjs: 6.0.1-2 libpve-access-control: 5.1-12 libpve-apiclient-perl: 2.0-5 libpve-common-perl: 5.0-56 libpve-guest-common-perl: 2.0-20 libpve-http-server-perl: 2.0-14 libpve-storage-perl: 5.0-44 libqb0: 1.0.3-1~bpo9 lvm2: 2.02.168-pve6 lxc-pve: 3.1.0-7 lxcfs: 3.0.3-pve1 novnc-pve: 1.0.0-3 proxmox-widget-toolkit: 1.0-28 pve-cluster: 5.0-38 pve-container: 2.0-41 pve-docs: 5.4-2 pve-edk2-firmware: 1.20190312-1 pve-firewall: 3.0-22 pve-firmware: 2.0-7 pve-ha-manager: 2.0-9 pve-i18n: 1.1-4 pve-libspice-server1: 0.14.1-2 pve-qemu-kvm: 3.0.1-4 pve-xtermjs: 3.12.0-1 qemu-server: 5.0-55 smartmontools: 6.5+svn4324-1 spiceterm: 3.0-5 vncterm: 1.5-3 zfsutils-linux: 0.7.13-pve1~bpo2 From leesteken at pm.me Mon Feb 10 17:04:32 2020 From: leesteken at pm.me (leesteken at pm.me) Date: Mon, 10 Feb 2020 16:04:32 +0000 Subject: [PVE-User] Can't start VM In-Reply-To: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> References: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> Message-ID: ??????? Original Message ??????? On Monday, February 10, 2020 4:49 PM, Gerald Brandt wrote: > Hi, > > I have a small cluster of 4 servers. One server in the cluster seems to > be having an issue. For the last two weeks, one or two of the VMs seem > to psuedo lock up over the weekend. I can still VNC in and type in a > login, but after typing the password, the system never responds. Also, I've seen this exact behavior if the memory inside the VM is full. It sound like the system cannot start a login shell. If you are using ballooning, it might let you login if you increase the available memory in the VM using the Monitor: info balloon. > all services on that VM (web, version control) are non-responsive. A > reset from VM console doesn't work, I need to do a stop and start. > > However, on start, I get this from the Proxmox server, and the VM never > starts: > > Feb 10 09:29:32 proxmox-2 pvedaemon[1142]: start VM 107: > UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: > Feb 10 09:29:32 proxmox-2 pvedaemon[16365]: root at pam starting task > UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: > Feb 10 09:29:34 proxmox-2 systemd[1]: Started Session 266 of user root. 
> Feb 10 09:29:35 proxmox-2 qm[1236]: VM 107 qmp command failed - VM 107 > not running > Feb 10 09:29:37 proxmox-2 pvedaemon[1142]: timeout waiting on systemd > Feb 10 09:29:37 proxmox-2 pvedaemon[16365]: root at pam end task > UPID:proxmox-2:00000476:0311D642:5E4176DC:qmstart:107:root at pam: timeout > waiting on system > > I have to migrate all the VMs off the server and reboot the server. Any > ideas? Sorry, I have no experience with clusters. Would Proxmox not do this automatically when you shutdown the node? kind regards, Arjen From gbr at majentis.com Mon Feb 10 17:11:43 2020 From: gbr at majentis.com (Gerald Brandt) Date: Mon, 10 Feb 2020 10:11:43 -0600 Subject: [PVE-User] Can't start VM In-Reply-To: References: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> Message-ID: On 2020-02-10 10:04 a.m., leesteken--- via pve-user wrote: > Sorry, I have no experience with clusters. > Would Proxmox not do this automatically when you shutdown the node? Only if I have HA turned on, and I do not. Gerald > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From laurentfdumont at gmail.com Mon Feb 10 19:06:03 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 10 Feb 2020 13:06:03 -0500 Subject: [PVE-User] Can't start VM In-Reply-To: References: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> Message-ID: This does feel like a resource exhaustion issue. Do you have any external monitoring in place for that Proxmox server? On Mon, Feb 10, 2020 at 11:11 AM Gerald Brandt wrote: > > On 2020-02-10 10:04 a.m., leesteken--- via pve-user wrote: > > Sorry, I have no experience with clusters. > > Would Proxmox not do this automatically when you shutdown the node? > > > Only if I have HA turned on, and I do not. > > Gerald > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gbr at majentis.com Mon Feb 10 19:47:19 2020 From: gbr at majentis.com (Gerald Brandt) Date: Mon, 10 Feb 2020 12:47:19 -0600 Subject: [PVE-User] Can't start VM In-Reply-To: References: <8973b3de-1266-9fc4-faf9-2cfcd147c905@majentis.com> Message-ID: On 2020-02-10 12:06 p.m., Laurent Dumont wrote: > This does feel like a resource exhaustion issue. Do you have any external > monitoring in place for that Proxmox server? I don't. I'll get something in place. htop showed good memory usage, disk space was fine. I noticed NFS dropped off for a bit, so looking into that. It was up when I checked out the system. Gerald > On Mon, Feb 10, 2020 at 11:11 AM Gerald Brandt wrote: > >> On 2020-02-10 10:04 a.m., leesteken--- via pve-user wrote: >>> Sorry, I have no experience with clusters. >>> Would Proxmox not do this automatically when you shutdown the node? >> >> Only if I have HA turned on, and I do not. 
>> >> Gerald >> >> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From dor at volz.ua Tue Feb 11 12:25:52 2020 From: dor at volz.ua (Dmytro O. Redchuk) Date: Tue, 11 Feb 2020 13:25:52 +0200 Subject: Per-VM backup hook scripts? Message-ID: <20200211112552.GM28126@volz.ua> Hi masters, please is it possible to attach backup hook scripts on per-vm basics, via GUI or CLI? -- Dmytro O. Redchuk From t.lamprecht at proxmox.com Tue Feb 11 15:52:49 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Tue, 11 Feb 2020 15:52:49 +0100 Subject: [PVE-User] Per-VM backup hook scripts? In-Reply-To: <20200211112552.GM28126@volz.ua> References: <20200211112552.GM28126@volz.ua> Message-ID: <0a86fa47-1fdf-2660-25cb-88a11347a811@proxmox.com> Hi, On 2/11/20 12:25 PM, Dmytro O. Redchuk wrote: > Hi masters, > > please is it possible to attach backup hook scripts on per-vm basics, > via GUI or CLI? > Currently you only can specify a hook script for a whole backup job. Either by uncommenting and setting the node-wide "script: /path/to/script" setting or passing the "--script" variable to a specific `vzdump` call. Bur as the VMID gets passed to the backup script so you can do specific steps or actions based on that. Also calling a specific per VM script is possible. See: /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl on a Proxmox VE installation for an example. cheers, Thomas From dor at volz.ua Tue Feb 11 16:32:19 2020 From: dor at volz.ua (Dmytro O. Redchuk) Date: Tue, 11 Feb 2020 17:32:19 +0200 Subject: Per-VM backup hook scripts? In-Reply-To: <0a86fa47-1fdf-2660-25cb-88a11347a811@proxmox.com> References: <20200211112552.GM28126@volz.ua> <0a86fa47-1fdf-2660-25cb-88a11347a811@proxmox.com> Message-ID: <20200211153219.GV28126@volz.ua> ? ??., 11-?? ???. 2020, ? 15:52 Thomas Lamprecht wrote: > Hi, > > On 2/11/20 12:25 PM, Dmytro O. Redchuk wrote: > > Hi masters, > > > > please is it possible to attach backup hook scripts on per-vm basics, > > via GUI or CLI? > > > > Currently you only can specify a hook script for a whole backup job. > Either by uncommenting and setting the node-wide "script: /path/to/script" > setting or passing the "--script" variable to a specific `vzdump` call. Ok, thank you Thomas for your tip :) > Bur as the VMID gets passed to the backup script so you can do specific steps > or actions based on that. Also calling a specific per VM script is possible. > > See: > /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl > > on a Proxmox VE installation for an example. Yes, this is an option, thank you. -- Dmytro O. Redchuk From gilberto.nunes32 at gmail.com Thu Feb 13 11:13:48 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 13 Feb 2020 07:13:48 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: HI all Still in trouble with this issue cat daemon.log | grep "Feb 12 22:10" Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner... Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner. 
Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu) Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0' syslog Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu) Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock backup Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0' pveversion pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) pve-kernel-4.15: 5.4-12 pve-kernel-4.15.18-24-pve: 4.15.18-52 pve-kernel-4.15.18-12-pve: 4.15.18-36 corosync: 2.4.4-pve1 criu: 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2 libjs-extjs: 6.0.1-2 libpve-access-control: 5.1-12 libpve-apiclient-perl: 2.0-5 libpve-common-perl: 5.0-56 libpve-guest-common-perl: 2.0-20 libpve-http-server-perl: 2.0-14 libpve-storage-perl: 5.0-44 libqb0: 1.0.3-1~bpo9 lvm2: 2.02.168-pve6 lxc-pve: 3.1.0-7 lxcfs: 3.0.3-pve1 novnc-pve: 1.0.0-3 proxmox-widget-toolkit: 1.0-28 pve-cluster: 5.0-38 pve-container: 2.0-41 pve-docs: 5.4-2 pve-edk2-firmware: 1.20190312-1 pve-firewall: 3.0-22 pve-firmware: 2.0-7 pve-ha-manager: 2.0-9 pve-i18n: 1.1-4 pve-libspice-server1: 0.14.1-2 pve-qemu-kvm: 3.0.1-4 pve-xtermjs: 3.12.0-1 qemu-server: 5.0-55 smartmontools: 6.5+svn4324-1 spiceterm: 3.0-5 vncterm: 1.5-3 zfsutils-linux: 0.7.13-pve1~bpo2 Some help??? Sould I upgrade the server to 6.x?? Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Hi there > > I got a strage error last night. Vzdump complain about the > disk no exist or lvm volume in this case but the volume exist, indeed! > In the morning I have do a manually backup and it's working fine... > Any advice? > > 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > 112: 2020-01-29 22:20:02 INFO: status = running > 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G > 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0' > > 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > 116: 2020-01-29 22:20:23 INFO: status = running > 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G > 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0' > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > From gaio at sv.lnf.it Thu Feb 13 11:47:47 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Thu, 13 Feb 2020 11:47:47 +0100 Subject: [PVE-User] Network interfaces renaming strangeness... Message-ID: <20200213104747.GL3460@sv.lnf.it> I've setup a little 'home' PVE server: proxmox6 (debian buster, kernel 5.3.18-1-pve). 
Bultin network card get detected and renamed: root at ino:~# grep tg3 /var/log/kern.log Feb 11 09:16:42 ino kernel: [ 3.449190] tg3.c:v3.137 (May 11, 2014) Feb 11 09:16:42 ino kernel: [ 3.468877] tg3 0000:1e:00.0 eth0: Tigon3 [partno(BCM95723) rev 5784100] (PCI Express) MAC address 9c:8e:99:7b:86:d9 Feb 11 09:16:42 ino kernel: [ 3.468879] tg3 0000:1e:00.0 eth0: attached PHY is 5784 (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[0]) Feb 11 09:16:42 ino kernel: [ 3.468881] tg3 0000:1e:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] Feb 11 09:16:42 ino kernel: [ 3.468883] tg3 0000:1e:00.0 eth0: dma_rwctrl[76180000] dma_mask[64-bit] Feb 11 09:16:42 ino kernel: [ 3.485088] tg3 0000:1e:00.0 ens1: renamed from eth0 Feb 11 09:16:44 ino kernel: [ 101.292206] tg3 0000:1e:00.0 ens1: Link is up at 100 Mbps, full duplex Feb 11 09:16:44 ino kernel: [ 101.292208] tg3 0000:1e:00.0 ens1: Flow control is on for TX and on for RX After that, i've addedd a second nic, pcie slot, that get detected but 'strangely' renamed: root at ino:~# grep e1000e /var/log/kern.log Feb 11 09:16:42 ino kernel: [ 3.451250] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k Feb 11 09:16:42 ino kernel: [ 3.451251] e1000e: Copyright(c) 1999 - 2015 Intel Corporation. Feb 11 09:16:42 ino kernel: [ 3.451430] e1000e 0000:20:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode Feb 11 09:16:42 ino kernel: [ 3.502028] e1000e 0000:20:00.0 0000:20:00.0 (uninitialized): registered PHC clock Feb 11 09:16:42 ino kernel: [ 3.556137] e1000e 0000:20:00.0 eth0: (PCI Express:2.5GT/s:Width x1) 2c:27:d7:14:9b:67 Feb 11 09:16:42 ino kernel: [ 3.556139] e1000e 0000:20:00.0 eth0: Intel(R) PRO/1000 Network Connection Feb 11 09:16:42 ino kernel: [ 3.556182] e1000e 0000:20:00.0 eth0: MAC: 3, PHY: 8, PBA No: G17305-003 Feb 11 09:16:42 ino kernel: [ 3.557393] e1000e 0000:20:00.0 rename3: renamed from eth0 Looking at: https://wiki.debian.org/NetworkInterfaceNames Strangeness remain, look at: root at ino:~# udevadm test-builtin net_id /sys/class/net/ens1 Load module index Parsed configuration file /usr/lib/systemd/network/99-default.link Created link configuration context. Using default interface naming scheme 'v240'. ID_NET_NAMING_SCHEME=v240 ID_NET_NAME_MAC=enx9c8e997b86d9 ID_OUI_FROM_DATABASE=Hewlett Packard ID_NET_NAME_PATH=enp30s0 ID_NET_NAME_SLOT=ens1 Unload module index Unloaded link configuration context. root at ino:~# udevadm test-builtin net_id /sys/class/net/rename3 Load module index Parsed configuration file /usr/lib/systemd/network/99-default.link Created link configuration context. Using default interface naming scheme 'v240'. ID_NET_NAMING_SCHEME=v240 ID_NET_NAME_MAC=enx2c27d7149b67 ID_OUI_FROM_DATABASE=Hewlett Packard ID_NET_NAME_PATH=enp32s0 ID_NET_NAME_SLOT=ens1 Unload module index Unloaded link configuration context. AFAI've understood network names would have to be 'enp30s0' and 'enp32s0' respectively, not 'ens1' and 'rename3'... Why?! Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From aderumier at odiso.com Thu Feb 13 11:55:29 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 13 Feb 2020 11:55:29 +0100 (CET) Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <20200213104747.GL3460@sv.lnf.it> References: <20200213104747.GL3460@sv.lnf.it> Message-ID: <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> hi, is it a fresh proxmox6 install ? or upgraded from proxmox5? any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be removed) no special grub option ? (net.ifnames, ...) ----- Mail original ----- De: "Marco Gaiarin" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 11:47:47 Objet: [PVE-User] Network interfaces renaming strangeness... I've setup a little 'home' PVE server: proxmox6 (debian buster, kernel 5.3.18-1-pve). Bultin network card get detected and renamed: root at ino:~# grep tg3 /var/log/kern.log Feb 11 09:16:42 ino kernel: [ 3.449190] tg3.c:v3.137 (May 11, 2014) Feb 11 09:16:42 ino kernel: [ 3.468877] tg3 0000:1e:00.0 eth0: Tigon3 [partno(BCM95723) rev 5784100] (PCI Express) MAC address 9c:8e:99:7b:86:d9 Feb 11 09:16:42 ino kernel: [ 3.468879] tg3 0000:1e:00.0 eth0: attached PHY is 5784 (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[0]) Feb 11 09:16:42 ino kernel: [ 3.468881] tg3 0000:1e:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] Feb 11 09:16:42 ino kernel: [ 3.468883] tg3 0000:1e:00.0 eth0: dma_rwctrl[76180000] dma_mask[64-bit] Feb 11 09:16:42 ino kernel: [ 3.485088] tg3 0000:1e:00.0 ens1: renamed from eth0 Feb 11 09:16:44 ino kernel: [ 101.292206] tg3 0000:1e:00.0 ens1: Link is up at 100 Mbps, full duplex Feb 11 09:16:44 ino kernel: [ 101.292208] tg3 0000:1e:00.0 ens1: Flow control is on for TX and on for RX After that, i've addedd a second nic, pcie slot, that get detected but 'strangely' renamed: root at ino:~# grep e1000e /var/log/kern.log Feb 11 09:16:42 ino kernel: [ 3.451250] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k Feb 11 09:16:42 ino kernel: [ 3.451251] e1000e: Copyright(c) 1999 - 2015 Intel Corporation. Feb 11 09:16:42 ino kernel: [ 3.451430] e1000e 0000:20:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode Feb 11 09:16:42 ino kernel: [ 3.502028] e1000e 0000:20:00.0 0000:20:00.0 (uninitialized): registered PHC clock Feb 11 09:16:42 ino kernel: [ 3.556137] e1000e 0000:20:00.0 eth0: (PCI Express:2.5GT/s:Width x1) 2c:27:d7:14:9b:67 Feb 11 09:16:42 ino kernel: [ 3.556139] e1000e 0000:20:00.0 eth0: Intel(R) PRO/1000 Network Connection Feb 11 09:16:42 ino kernel: [ 3.556182] e1000e 0000:20:00.0 eth0: MAC: 3, PHY: 8, PBA No: G17305-003 Feb 11 09:16:42 ino kernel: [ 3.557393] e1000e 0000:20:00.0 rename3: renamed from eth0 Looking at: https://wiki.debian.org/NetworkInterfaceNames Strangeness remain, look at: root at ino:~# udevadm test-builtin net_id /sys/class/net/ens1 Load module index Parsed configuration file /usr/lib/systemd/network/99-default.link Created link configuration context. Using default interface naming scheme 'v240'. ID_NET_NAMING_SCHEME=v240 ID_NET_NAME_MAC=enx9c8e997b86d9 ID_OUI_FROM_DATABASE=Hewlett Packard ID_NET_NAME_PATH=enp30s0 ID_NET_NAME_SLOT=ens1 Unload module index Unloaded link configuration context. root at ino:~# udevadm test-builtin net_id /sys/class/net/rename3 Load module index Parsed configuration file /usr/lib/systemd/network/99-default.link Created link configuration context. 
Using default interface naming scheme 'v240'. ID_NET_NAMING_SCHEME=v240 ID_NET_NAME_MAC=enx2c27d7149b67 ID_OUI_FROM_DATABASE=Hewlett Packard ID_NET_NAME_PATH=enp32s0 ID_NET_NAME_SLOT=ens1 Unload module index Unloaded link configuration context. AFAI've understood network names would have to be 'enp30s0' and 'enp32s0' respectively, not 'ens1' and 'rename3'... Why?! Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Thu Feb 13 12:10:37 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 13 Feb 2020 12:10:37 +0100 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > HI all > > Still in trouble with this issue > > cat daemon.log | grep "Feb 12 22:10" > Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner... > Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner. > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu) > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no > such volume 'local-lvm:vm-110-disk-0' > > syslog > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu) > Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock backup > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no > such volume 'local-lvm:vm-110-disk-0' > > pveversion > pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > > proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > pve-kernel-4.15: 5.4-12 > pve-kernel-4.15.18-24-pve: 4.15.18-52 > pve-kernel-4.15.18-12-pve: 4.15.18-36 > corosync: 2.4.4-pve1 > criu: 2.11.1-1~bpo90 > glusterfs-client: 3.8.8-1 > ksm-control-daemon: 1.2-2 > libjs-extjs: 6.0.1-2 > libpve-access-control: 5.1-12 > libpve-apiclient-perl: 2.0-5 > libpve-common-perl: 5.0-56 > libpve-guest-common-perl: 2.0-20 > libpve-http-server-perl: 2.0-14 > libpve-storage-perl: 5.0-44 > libqb0: 1.0.3-1~bpo9 > lvm2: 2.02.168-pve6 > lxc-pve: 3.1.0-7 > lxcfs: 3.0.3-pve1 > novnc-pve: 1.0.0-3 > proxmox-widget-toolkit: 1.0-28 > pve-cluster: 5.0-38 > pve-container: 2.0-41 > pve-docs: 5.4-2 > pve-edk2-firmware: 1.20190312-1 > pve-firewall: 3.0-22 > pve-firmware: 2.0-7 > pve-ha-manager: 2.0-9 > pve-i18n: 1.1-4 > pve-libspice-server1: 0.14.1-2 > pve-qemu-kvm: 3.0.1-4 > pve-xtermjs: 3.12.0-1 > qemu-server: 5.0-55 > smartmontools: 6.5+svn4324-1 > spiceterm: 3.0-5 > vncterm: 1.5-3 > zfsutils-linux: 0.7.13-pve1~bpo2 > > > Some help??? Sould I upgrade the server to 6.x?? > > Thanks > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > gilberto.nunes32 at gmail.com> escreveu: > >> Hi there >> >> I got a strage error last night. 
Vzdump complain about the >> disk no exist or lvm volume in this case but the volume exist, indeed! >> In the morning I have do a manually backup and it's working fine... >> Any advice? >> >> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) >> 112: 2020-01-29 22:20:02 INFO: status = running >> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 >> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G >> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0' >> >> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) >> 116: 2020-01-29 22:20:23 INFO: status = running >> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G >> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0' >> >> --- >> Gilberto Nunes Ferreira >> >> (47) 3025-5907 >> (47) 99676-7530 - Whatsapp / Telegram >> >> Skype: gilberto.nunes36 >> >> >> >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From gilberto.nunes32 at gmail.com Thu Feb 13 12:18:45 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 13 Feb 2020 08:18:45 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: n: Thu Feb 13 07:06:19 2020 a2web:~# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert backup iscsi -wi-ao---- 1.61t data pve twi-aotz-- 3.34t 88.21 9.53 root pve -wi-ao---- 96.00g snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data vm-104-disk-0 snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data vm-113-disk-0 swap pve -wi-ao---- 8.00g vm-101-disk-0 pve Vwi-aotz-- 50.00g data 24.17 vm-102-disk-0 pve Vwi-aotz-- 500.00g data 65.65 vm-103-disk-0 pve Vwi-aotz-- 300.00g data 37.28 vm-104-disk-0 pve Vwi-aotz-- 200.00g data 17.87 vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data 35.53 vm-105-disk-0 pve Vwi-aotz-- 700.00g data 90.18 vm-106-disk-0 pve Vwi-aotz-- 150.00g data 93.55 vm-107-disk-0 pve Vwi-aotz-- 500.00g data 98.20 vm-108-disk-0 pve Vwi-aotz-- 200.00g data 98.02 vm-109-disk-0 pve Vwi-aotz-- 100.00g data 93.68 vm-110-disk-0 pve Vwi-aotz-- 100.00g data 34.55 vm-111-disk-0 pve Vwi-aotz-- 100.00g data 79.03 vm-112-disk-0 pve Vwi-aotz-- 120.00g data 93.78 vm-113-disk-0 pve Vwi-aotz-- 50.00g data 65.42 vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data 43.64 vm-114-disk-0 pve Vwi-aotz-- 120.00g data 100.00 vm-115-disk-0 pve Vwi-a-tz-- 100.00g data 70.28 vm-115-disk-1 pve Vwi-a-tz-- 50.00g data 0.00 vm-116-disk-0 pve Vwi-aotz-- 100.00g data 26.34 vm-117-disk-0 pve Vwi-aotz-- 100.00g data 100.00 vm-118-disk-0 pve Vwi-aotz-- 100.00g data 100.00 vm-119-disk-0 pve Vwi-aotz-- 25.00g data 18.42 vm-121-disk-0 pve Vwi-aotz-- 100.00g data 23.76 vm-122-disk-0 pve Vwi-aotz-- 100.00g data 100.00 vm-123-disk-0 pve Vwi-aotz-- 150.00g data 37.89 vm-124-disk-0 pve Vwi-aotz-- 100.00g data 30.73 vm-125-disk-0 pve Vwi-aotz-- 50.00g data 9.02 vm-126-disk-0 pve Vwi-aotz-- 30.00g data 99.72 vm-127-disk-0 pve Vwi-aotz-- 50.00g data 
10.79 vm-129-disk-0 pve Vwi-aotz-- 20.00g data 45.04 cat /etc/pve/storage.cfg dir: local path /var/lib/vz content backup,iso,vztmpl lvmthin: local-lvm thinpool data vgname pve content rootdir,images iscsi: iscsi portal some-portal target some-target content images lvm: iscsi-lvm vgname iscsi base iscsi:0.0.0.scsi-mpatha content rootdir,images shared 1 dir: backup path /backup content images,rootdir,iso,backup maxfiles 3 shared 0 --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza escreveu: > Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? > > El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > > HI all > > > > Still in trouble with this issue > > > > cat daemon.log | grep "Feb 12 22:10" > > Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication > runner... > > Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner. > > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > (qemu) > > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no > > such volume 'local-lvm:vm-110-disk-0' > > > > syslog > > Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > (qemu) > > Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock backup > > Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no > > such volume 'local-lvm:vm-110-disk-0' > > > > pveversion > > pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > > > > proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > > pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > > pve-kernel-4.15: 5.4-12 > > pve-kernel-4.15.18-24-pve: 4.15.18-52 > > pve-kernel-4.15.18-12-pve: 4.15.18-36 > > corosync: 2.4.4-pve1 > > criu: 2.11.1-1~bpo90 > > glusterfs-client: 3.8.8-1 > > ksm-control-daemon: 1.2-2 > > libjs-extjs: 6.0.1-2 > > libpve-access-control: 5.1-12 > > libpve-apiclient-perl: 2.0-5 > > libpve-common-perl: 5.0-56 > > libpve-guest-common-perl: 2.0-20 > > libpve-http-server-perl: 2.0-14 > > libpve-storage-perl: 5.0-44 > > libqb0: 1.0.3-1~bpo9 > > lvm2: 2.02.168-pve6 > > lxc-pve: 3.1.0-7 > > lxcfs: 3.0.3-pve1 > > novnc-pve: 1.0.0-3 > > proxmox-widget-toolkit: 1.0-28 > > pve-cluster: 5.0-38 > > pve-container: 2.0-41 > > pve-docs: 5.4-2 > > pve-edk2-firmware: 1.20190312-1 > > pve-firewall: 3.0-22 > > pve-firmware: 2.0-7 > > pve-ha-manager: 2.0-9 > > pve-i18n: 1.1-4 > > pve-libspice-server1: 0.14.1-2 > > pve-qemu-kvm: 3.0.1-4 > > pve-xtermjs: 3.12.0-1 > > qemu-server: 5.0-55 > > smartmontools: 6.5+svn4324-1 > > spiceterm: 3.0-5 > > vncterm: 1.5-3 > > zfsutils-linux: 0.7.13-pve1~bpo2 > > > > > > Some help??? Sould I upgrade the server to 6.x?? > > > > Thanks > > > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > > gilberto.nunes32 at gmail.com> escreveu: > > > >> Hi there > >> > >> I got a strage error last night. Vzdump complain about the > >> disk no exist or lvm volume in this case but the volume exist, indeed! > >> In the morning I have do a manually backup and it's working fine... > >> Any advice? 
> >> > >> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > >> 112: 2020-01-29 22:20:02 INFO: status = running > >> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > >> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > >> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > 'local-lvm:vm-112-disk-0' 120G > >> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such > volume 'local-lvm:vm-112-disk-0' > >> > >> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > >> 116: 2020-01-29 22:20:23 INFO: status = running > >> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > >> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > >> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > 'local-lvm:vm-116-disk-0' 100G > >> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such > volume 'local-lvm:vm-116-disk-0' > >> > >> --- > >> Gilberto Nunes Ferreira > >> > >> (47) 3025-5907 > >> (47) 99676-7530 - Whatsapp / Telegram > >> > >> Skype: gilberto.nunes36 > >> > >> > >> > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gaio at sv.lnf.it Thu Feb 13 12:32:49 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Thu, 13 Feb 2020 12:32:49 +0100 Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> References: <20200213104747.GL3460@sv.lnf.it> <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> Message-ID: <20200213113249.GN3460@sv.lnf.it> Mandi! Alexandre DERUMIER In chel di` si favelave... > is it a fresh proxmox6 install ? or upgraded from proxmox5? Fresh proxmox 6, from scratch. Upgraded daily via APT. > any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be removed) No: root at ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or directory > no special grub option ? (net.ifnames, ...) No: root at ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep -v '^[[:space:]]*#' GRUB_DEFAULT=0 GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="Proxmox Virtual Environment" GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" GRUB_DISABLE_OS_PROBER=true GRUB_DISABLE_RECOVERY="true" -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From elacunza at binovo.es Thu Feb 13 12:37:03 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 13 Feb 2020 12:37:03 +0100 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: It's quite strange, what about "ls /dev/pve/*"? 
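For completeness, a minimal check sequence to see whether the PVE storage layer and LVM agree about that volume (just a sketch, using only the standard pvesm/lvs/ls tools already shown in this thread; vm-110-disk-0 is simply the volume from the failing job):

# what the storage plugin itself lists on local-lvm
pvesm list local-lvm
# resolve the volume ID vzdump complained about to a device path
pvesm path local-lvm:vm-110-disk-0
# confirm the thin LV exists and is active in the 'pve' VG
lvs -o lv_name,lv_attr,pool_lv pve | grep vm-110
# and that the device node is really there
ls -l /dev/pve/vm-110-disk-0

If all of these agree while the scheduled job still fails, that points at something happening at backup time rather than at a volume that is actually missing.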
El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > n: Thu Feb 13 07:06:19 2020 > a2web:~# lvs > LV VG Attr LSize Pool Origin > Data% Meta% Move Log Cpy%Sync Convert > backup iscsi -wi-ao---- 1.61t > > data pve twi-aotz-- 3.34t > 88.21 9.53 > root pve -wi-ao---- 96.00g > > snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data > vm-104-disk-0 > snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data > vm-113-disk-0 > swap pve -wi-ao---- 8.00g > > vm-101-disk-0 pve Vwi-aotz-- 50.00g data > 24.17 > vm-102-disk-0 pve Vwi-aotz-- 500.00g data > 65.65 > vm-103-disk-0 pve Vwi-aotz-- 300.00g data > 37.28 > vm-104-disk-0 pve Vwi-aotz-- 200.00g data > 17.87 > vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data > 35.53 > vm-105-disk-0 pve Vwi-aotz-- 700.00g data > 90.18 > vm-106-disk-0 pve Vwi-aotz-- 150.00g data > 93.55 > vm-107-disk-0 pve Vwi-aotz-- 500.00g data > 98.20 > vm-108-disk-0 pve Vwi-aotz-- 200.00g data > 98.02 > vm-109-disk-0 pve Vwi-aotz-- 100.00g data > 93.68 > vm-110-disk-0 pve Vwi-aotz-- 100.00g data > 34.55 > vm-111-disk-0 pve Vwi-aotz-- 100.00g data > 79.03 > vm-112-disk-0 pve Vwi-aotz-- 120.00g data > 93.78 > vm-113-disk-0 pve Vwi-aotz-- 50.00g data > 65.42 > vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data > 43.64 > vm-114-disk-0 pve Vwi-aotz-- 120.00g data > 100.00 > vm-115-disk-0 pve Vwi-a-tz-- 100.00g data > 70.28 > vm-115-disk-1 pve Vwi-a-tz-- 50.00g data > 0.00 > vm-116-disk-0 pve Vwi-aotz-- 100.00g data > 26.34 > vm-117-disk-0 pve Vwi-aotz-- 100.00g data > 100.00 > vm-118-disk-0 pve Vwi-aotz-- 100.00g data > 100.00 > vm-119-disk-0 pve Vwi-aotz-- 25.00g data > 18.42 > vm-121-disk-0 pve Vwi-aotz-- 100.00g data > 23.76 > vm-122-disk-0 pve Vwi-aotz-- 100.00g data > 100.00 > vm-123-disk-0 pve Vwi-aotz-- 150.00g data > 37.89 > vm-124-disk-0 pve Vwi-aotz-- 100.00g data > 30.73 > vm-125-disk-0 pve Vwi-aotz-- 50.00g data > 9.02 > vm-126-disk-0 pve Vwi-aotz-- 30.00g data > 99.72 > vm-127-disk-0 pve Vwi-aotz-- 50.00g data > 10.79 > vm-129-disk-0 pve Vwi-aotz-- 20.00g data > 45.04 > > cat /etc/pve/storage.cfg > dir: local > path /var/lib/vz > content backup,iso,vztmpl > > lvmthin: local-lvm > thinpool data > vgname pve > content rootdir,images > > iscsi: iscsi > portal some-portal > target some-target > content images > > lvm: iscsi-lvm > vgname iscsi > base iscsi:0.0.0.scsi-mpatha > content rootdir,images > shared 1 > > dir: backup > path /backup > content images,rootdir,iso,backup > maxfiles 3 > shared 0 > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza > escreveu: > >> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? >> >> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: >>> HI all >>> >>> Still in trouble with this issue >>> >>> cat daemon.log | grep "Feb 12 22:10" >>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication >> runner... >>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner. 
>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >> (qemu) >>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no >>> such volume 'local-lvm:vm-110-disk-0' >>> >>> syslog >>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >> (qemu) >>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock backup >>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no >>> such volume 'local-lvm:vm-110-disk-0' >>> >>> pveversion >>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) >>> >>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) >>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) >>> pve-kernel-4.15: 5.4-12 >>> pve-kernel-4.15.18-24-pve: 4.15.18-52 >>> pve-kernel-4.15.18-12-pve: 4.15.18-36 >>> corosync: 2.4.4-pve1 >>> criu: 2.11.1-1~bpo90 >>> glusterfs-client: 3.8.8-1 >>> ksm-control-daemon: 1.2-2 >>> libjs-extjs: 6.0.1-2 >>> libpve-access-control: 5.1-12 >>> libpve-apiclient-perl: 2.0-5 >>> libpve-common-perl: 5.0-56 >>> libpve-guest-common-perl: 2.0-20 >>> libpve-http-server-perl: 2.0-14 >>> libpve-storage-perl: 5.0-44 >>> libqb0: 1.0.3-1~bpo9 >>> lvm2: 2.02.168-pve6 >>> lxc-pve: 3.1.0-7 >>> lxcfs: 3.0.3-pve1 >>> novnc-pve: 1.0.0-3 >>> proxmox-widget-toolkit: 1.0-28 >>> pve-cluster: 5.0-38 >>> pve-container: 2.0-41 >>> pve-docs: 5.4-2 >>> pve-edk2-firmware: 1.20190312-1 >>> pve-firewall: 3.0-22 >>> pve-firmware: 2.0-7 >>> pve-ha-manager: 2.0-9 >>> pve-i18n: 1.1-4 >>> pve-libspice-server1: 0.14.1-2 >>> pve-qemu-kvm: 3.0.1-4 >>> pve-xtermjs: 3.12.0-1 >>> qemu-server: 5.0-55 >>> smartmontools: 6.5+svn4324-1 >>> spiceterm: 3.0-5 >>> vncterm: 1.5-3 >>> zfsutils-linux: 0.7.13-pve1~bpo2 >>> >>> >>> Some help??? Sould I upgrade the server to 6.x?? >>> >>> Thanks >>> >>> --- >>> Gilberto Nunes Ferreira >>> >>> (47) 3025-5907 >>> (47) 99676-7530 - Whatsapp / Telegram >>> >>> Skype: gilberto.nunes36 >>> >>> >>> >>> >>> >>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < >>> gilberto.nunes32 at gmail.com> escreveu: >>> >>>> Hi there >>>> >>>> I got a strage error last night. Vzdump complain about the >>>> disk no exist or lvm volume in this case but the volume exist, indeed! >>>> In the morning I have do a manually backup and it's working fine... >>>> Any advice? 
>>>> >>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) >>>> 112: 2020-01-29 22:20:02 INFO: status = running >>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 >>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' >> 'local-lvm:vm-112-disk-0' 120G >>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such >> volume 'local-lvm:vm-112-disk-0' >>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) >>>> 116: 2020-01-29 22:20:23 INFO: status = running >>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' >> 'local-lvm:vm-116-disk-0' 100G >>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such >> volume 'local-lvm:vm-116-disk-0' >>>> --- >>>> Gilberto Nunes Ferreira >>>> >>>> (47) 3025-5907 >>>> (47) 99676-7530 - Whatsapp / Telegram >>>> >>>> Skype: gilberto.nunes36 >>>> >>>> >>>> >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From gilberto.nunes32 at gmail.com Thu Feb 13 12:40:57 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 13 Feb 2020 08:40:57 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! 
In-Reply-To: References: Message-ID: Quite strange to say the least ls /dev/pve/* /dev/pve/root /dev/pve/vm-109-disk-0 /dev/pve/vm-118-disk-0 /dev/pve/swap /dev/pve/vm-110-disk-0 /dev/pve/vm-119-disk-0 /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 /dev/pve/vm-121-disk-0 /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 /dev/pve/vm-122-disk-0 /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 /dev/pve/vm-123-disk-0 /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon /dev/pve/vm-124-disk-0 /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 /dev/pve/vm-125-disk-0 /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 /dev/pve/vm-126-disk-0 /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 /dev/pve/vm-127-disk-0 /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 /dev/pve/vm-129-disk-0 /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 ls /dev/mapper/ control pve-vm--104--state--LUKPLAS pve-vm--115--disk--1 iscsi-backup pve-vm--105--disk--0 pve-vm--116--disk--0 mpatha pve-vm--106--disk--0 pve-vm--117--disk--0 pve-data pve-vm--107--disk--0 pve-vm--118--disk--0 pve-data_tdata pve-vm--108--disk--0 pve-vm--119--disk--0 pve-data_tmeta pve-vm--109--disk--0 pve-vm--121--disk--0 pve-data-tpool pve-vm--110--disk--0 pve-vm--122--disk--0 pve-root pve-vm--111--disk--0 pve-vm--123--disk--0 pve-swap pve-vm--112--disk--0 pve-vm--124--disk--0 pve-vm--101--disk--0 pve-vm--113--disk--0 pve-vm--125--disk--0 pve-vm--102--disk--0 pve-vm--113--state--antes_balloon pve-vm--126--disk--0 pve-vm--103--disk--0 pve-vm--114--disk--0 pve-vm--127--disk--0 pve-vm--104--disk--0 pve-vm--115--disk--0 pve-vm--129--disk--0 --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza escreveu: > It's quite strange, what about "ls /dev/pve/*"? 
> > El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > > n: Thu Feb 13 07:06:19 2020 > > a2web:~# lvs > > LV VG Attr LSize Pool Origin > > Data% Meta% Move Log Cpy%Sync Convert > > backup iscsi -wi-ao---- 1.61t > > > > data pve twi-aotz-- 3.34t > > 88.21 9.53 > > root pve -wi-ao---- 96.00g > > > > snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data > > vm-104-disk-0 > > snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data > > vm-113-disk-0 > > swap pve -wi-ao---- 8.00g > > > > vm-101-disk-0 pve Vwi-aotz-- 50.00g data > > 24.17 > > vm-102-disk-0 pve Vwi-aotz-- 500.00g data > > 65.65 > > vm-103-disk-0 pve Vwi-aotz-- 300.00g data > > 37.28 > > vm-104-disk-0 pve Vwi-aotz-- 200.00g data > > 17.87 > > vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data > > 35.53 > > vm-105-disk-0 pve Vwi-aotz-- 700.00g data > > 90.18 > > vm-106-disk-0 pve Vwi-aotz-- 150.00g data > > 93.55 > > vm-107-disk-0 pve Vwi-aotz-- 500.00g data > > 98.20 > > vm-108-disk-0 pve Vwi-aotz-- 200.00g data > > 98.02 > > vm-109-disk-0 pve Vwi-aotz-- 100.00g data > > 93.68 > > vm-110-disk-0 pve Vwi-aotz-- 100.00g data > > 34.55 > > vm-111-disk-0 pve Vwi-aotz-- 100.00g data > > 79.03 > > vm-112-disk-0 pve Vwi-aotz-- 120.00g data > > 93.78 > > vm-113-disk-0 pve Vwi-aotz-- 50.00g data > > 65.42 > > vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data > > 43.64 > > vm-114-disk-0 pve Vwi-aotz-- 120.00g data > > 100.00 > > vm-115-disk-0 pve Vwi-a-tz-- 100.00g data > > 70.28 > > vm-115-disk-1 pve Vwi-a-tz-- 50.00g data > > 0.00 > > vm-116-disk-0 pve Vwi-aotz-- 100.00g data > > 26.34 > > vm-117-disk-0 pve Vwi-aotz-- 100.00g data > > 100.00 > > vm-118-disk-0 pve Vwi-aotz-- 100.00g data > > 100.00 > > vm-119-disk-0 pve Vwi-aotz-- 25.00g data > > 18.42 > > vm-121-disk-0 pve Vwi-aotz-- 100.00g data > > 23.76 > > vm-122-disk-0 pve Vwi-aotz-- 100.00g data > > 100.00 > > vm-123-disk-0 pve Vwi-aotz-- 150.00g data > > 37.89 > > vm-124-disk-0 pve Vwi-aotz-- 100.00g data > > 30.73 > > vm-125-disk-0 pve Vwi-aotz-- 50.00g data > > 9.02 > > vm-126-disk-0 pve Vwi-aotz-- 30.00g data > > 99.72 > > vm-127-disk-0 pve Vwi-aotz-- 50.00g data > > 10.79 > > vm-129-disk-0 pve Vwi-aotz-- 20.00g data > > 45.04 > > > > cat /etc/pve/storage.cfg > > dir: local > > path /var/lib/vz > > content backup,iso,vztmpl > > > > lvmthin: local-lvm > > thinpool data > > vgname pve > > content rootdir,images > > > > iscsi: iscsi > > portal some-portal > > target some-target > > content images > > > > lvm: iscsi-lvm > > vgname iscsi > > base iscsi:0.0.0.scsi-mpatha > > content rootdir,images > > shared 1 > > > > dir: backup > > path /backup > > content images,rootdir,iso,backup > > maxfiles 3 > > shared 0 > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza > > escreveu: > > > >> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? > >> > >> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > >>> HI all > >>> > >>> Still in trouble with this issue > >>> > >>> cat daemon.log | grep "Feb 12 22:10" > >>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication > >> runner... > >>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication > runner. 
> >>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > >> (qemu) > >>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - > no > >>> such volume 'local-lvm:vm-110-disk-0' > >>> > >>> syslog > >>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > >> (qemu) > >>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock > backup > >>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - > no > >>> such volume 'local-lvm:vm-110-disk-0' > >>> > >>> pveversion > >>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > >>> > >>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > >>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > >>> pve-kernel-4.15: 5.4-12 > >>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > >>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > >>> corosync: 2.4.4-pve1 > >>> criu: 2.11.1-1~bpo90 > >>> glusterfs-client: 3.8.8-1 > >>> ksm-control-daemon: 1.2-2 > >>> libjs-extjs: 6.0.1-2 > >>> libpve-access-control: 5.1-12 > >>> libpve-apiclient-perl: 2.0-5 > >>> libpve-common-perl: 5.0-56 > >>> libpve-guest-common-perl: 2.0-20 > >>> libpve-http-server-perl: 2.0-14 > >>> libpve-storage-perl: 5.0-44 > >>> libqb0: 1.0.3-1~bpo9 > >>> lvm2: 2.02.168-pve6 > >>> lxc-pve: 3.1.0-7 > >>> lxcfs: 3.0.3-pve1 > >>> novnc-pve: 1.0.0-3 > >>> proxmox-widget-toolkit: 1.0-28 > >>> pve-cluster: 5.0-38 > >>> pve-container: 2.0-41 > >>> pve-docs: 5.4-2 > >>> pve-edk2-firmware: 1.20190312-1 > >>> pve-firewall: 3.0-22 > >>> pve-firmware: 2.0-7 > >>> pve-ha-manager: 2.0-9 > >>> pve-i18n: 1.1-4 > >>> pve-libspice-server1: 0.14.1-2 > >>> pve-qemu-kvm: 3.0.1-4 > >>> pve-xtermjs: 3.12.0-1 > >>> qemu-server: 5.0-55 > >>> smartmontools: 6.5+svn4324-1 > >>> spiceterm: 3.0-5 > >>> vncterm: 1.5-3 > >>> zfsutils-linux: 0.7.13-pve1~bpo2 > >>> > >>> > >>> Some help??? Sould I upgrade the server to 6.x?? > >>> > >>> Thanks > >>> > >>> --- > >>> Gilberto Nunes Ferreira > >>> > >>> (47) 3025-5907 > >>> (47) 99676-7530 - Whatsapp / Telegram > >>> > >>> Skype: gilberto.nunes36 > >>> > >>> > >>> > >>> > >>> > >>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > >>> gilberto.nunes32 at gmail.com> escreveu: > >>> > >>>> Hi there > >>>> > >>>> I got a strage error last night. Vzdump complain about the > >>>> disk no exist or lvm volume in this case but the volume exist, indeed! > >>>> In the morning I have do a manually backup and it's working fine... > >>>> Any advice? 
> >>>> > >>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > >>>> 112: 2020-01-29 22:20:02 INFO: status = running > >>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > >>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > >>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > >> 'local-lvm:vm-112-disk-0' 120G > >>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such > >> volume 'local-lvm:vm-112-disk-0' > >>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > >>>> 116: 2020-01-29 22:20:23 INFO: status = running > >>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > >>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > >>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > >> 'local-lvm:vm-116-disk-0' 100G > >>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such > >> volume 'local-lvm:vm-116-disk-0' > >>>> --- > >>>> Gilberto Nunes Ferreira > >>>> > >>>> (47) 3025-5907 > >>>> (47) 99676-7530 - Whatsapp / Telegram > >>>> > >>>> Skype: gilberto.nunes36 > >>>> > >>>> > >>>> > >>>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Thu Feb 13 12:54:40 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 13 Feb 2020 12:54:40 +0100 (CET) Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <20200213113249.GN3460@sv.lnf.it> References: <20200213104747.GL3460@sv.lnf.it> <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> <20200213113249.GN3460@sv.lnf.it> Message-ID: <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> I wonder if it could be because both are detected on same slot ID_NET_NAME_SLOT=ens1 can you try to edit: /usr/lib/systemd/network/99-default.link "NamePolicy=keep kernel database onboard slot path" and reverse slot/path "NamePolicy=keep kernel database onboard path slot" and reboot I have see a user on the forum with same problem ----- Mail original ----- De: "Marco Gaiarin" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 12:32:49 Objet: Re: [PVE-User] Network interfaces renaming strangeness... Mandi! Alexandre DERUMIER In chel di` si favelave... > is it a fresh proxmox6 install ? or upgraded from proxmox5? Fresh proxmox 6, from scratch. Upgraded daily via APT. > any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? 
(should be removed) No: root at ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or directory > no special grub option ? (net.ifnames, ...) No: root at ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep -v '^[[:space:]]*#' GRUB_DEFAULT=0 GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="Proxmox Virtual Environment" GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" GRUB_DISABLE_OS_PROBER=true GRUB_DISABLE_RECOVERY="true" -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From aderumier at odiso.com Thu Feb 13 13:06:09 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 13 Feb 2020 13:06:09 +0100 (CET) Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> References: <20200213104747.GL3460@sv.lnf.it> <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> <20200213113249.GN3460@sv.lnf.it> <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> Message-ID: <1495478978.3250545.1581595569080.JavaMail.zimbra@odiso.com> Also, what is the physical setup ? is it a server ? are the 2 cards ,dedicated pci card ? (or maybe 1 in embedded in the server ?) Maybe are they plugged to a riser/pci extension ? ----- Mail original ----- De: "aderumier" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 12:54:40 Objet: Re: [PVE-User] Network interfaces renaming strangeness... I wonder if it could be because both are detected on same slot ID_NET_NAME_SLOT=ens1 can you try to edit: /usr/lib/systemd/network/99-default.link "NamePolicy=keep kernel database onboard slot path" and reverse slot/path "NamePolicy=keep kernel database onboard path slot" and reboot I have see a user on the forum with same problem ----- Mail original ----- De: "Marco Gaiarin" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 12:32:49 Objet: Re: [PVE-User] Network interfaces renaming strangeness... Mandi! Alexandre DERUMIER In chel di` si favelave... > is it a fresh proxmox6 install ? or upgraded from proxmox5? Fresh proxmox 6, from scratch. Upgraded daily via APT. > any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be removed) No: root at ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or directory > no special grub option ? (net.ifnames, ...) No: root at ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep -v '^[[:space:]]*#' GRUB_DEFAULT=0 GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="Proxmox Virtual Environment" GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" GRUB_DISABLE_OS_PROBER=true GRUB_DISABLE_RECOVERY="true" -- dott. 
Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From aderumier at odiso.com Thu Feb 13 13:17:48 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 13 Feb 2020 13:17:48 +0100 (CET) Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <1495478978.3250545.1581595569080.JavaMail.zimbra@odiso.com> References: <20200213104747.GL3460@sv.lnf.it> <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> <20200213113249.GN3460@sv.lnf.it> <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> <1495478978.3250545.1581595569080.JavaMail.zimbra@odiso.com> Message-ID: <1399150417.3250624.1581596268756.JavaMail.zimbra@odiso.com> also found on systemd github https://github.com/systemd/systemd/issues/13788 it could be a firmware bug in e1000, exposing the wrong slot ----- Mail original ----- De: "aderumier" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 13:06:09 Objet: Re: [PVE-User] Network interfaces renaming strangeness... Also, what is the physical setup ? is it a server ? are the 2 cards ,dedicated pci card ? (or maybe 1 in embedded in the server ?) Maybe are they plugged to a riser/pci extension ? ----- Mail original ----- De: "aderumier" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 12:54:40 Objet: Re: [PVE-User] Network interfaces renaming strangeness... I wonder if it could be because both are detected on same slot ID_NET_NAME_SLOT=ens1 can you try to edit: /usr/lib/systemd/network/99-default.link "NamePolicy=keep kernel database onboard slot path" and reverse slot/path "NamePolicy=keep kernel database onboard path slot" and reboot I have see a user on the forum with same problem ----- Mail original ----- De: "Marco Gaiarin" ?: "proxmoxve" Envoy?: Jeudi 13 F?vrier 2020 12:32:49 Objet: Re: [PVE-User] Network interfaces renaming strangeness... Mandi! Alexandre DERUMIER In chel di` si favelave... > is it a fresh proxmox6 install ? or upgraded from proxmox5? Fresh proxmox 6, from scratch. Upgraded daily via APT. > any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be removed) No: root at ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or directory > no special grub option ? (net.ifnames, ...) No: root at ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep -v '^[[:space:]]*#' GRUB_DEFAULT=0 GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="Proxmox Virtual Environment" GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" GRUB_DISABLE_OS_PROBER=true GRUB_DISABLE_RECOVERY="true" -- dott. 
Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Thu Feb 13 13:18:48 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 13 Feb 2020 13:18:48 +0100 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: What about: pvesm list local-lvm ls -l /dev/pve/vm-110-disk-0 El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > Quite strange to say the least > > > ls /dev/pve/* > /dev/pve/root /dev/pve/vm-109-disk-0 > /dev/pve/vm-118-disk-0 > /dev/pve/swap /dev/pve/vm-110-disk-0 > /dev/pve/vm-119-disk-0 > /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > /dev/pve/vm-121-disk-0 > /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > /dev/pve/vm-122-disk-0 > /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > /dev/pve/vm-123-disk-0 > /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon > /dev/pve/vm-124-disk-0 > /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > /dev/pve/vm-125-disk-0 > /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > /dev/pve/vm-126-disk-0 > /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > /dev/pve/vm-127-disk-0 > /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > /dev/pve/vm-129-disk-0 > /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > > ls /dev/mapper/ > control pve-vm--104--state--LUKPLAS > pve-vm--115--disk--1 > iscsi-backup pve-vm--105--disk--0 > pve-vm--116--disk--0 > mpatha pve-vm--106--disk--0 > pve-vm--117--disk--0 > pve-data pve-vm--107--disk--0 > pve-vm--118--disk--0 > pve-data_tdata pve-vm--108--disk--0 > pve-vm--119--disk--0 > pve-data_tmeta pve-vm--109--disk--0 > pve-vm--121--disk--0 > pve-data-tpool pve-vm--110--disk--0 > pve-vm--122--disk--0 > pve-root pve-vm--111--disk--0 > pve-vm--123--disk--0 > pve-swap pve-vm--112--disk--0 > pve-vm--124--disk--0 > pve-vm--101--disk--0 pve-vm--113--disk--0 > pve-vm--125--disk--0 > pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > pve-vm--126--disk--0 > pve-vm--103--disk--0 pve-vm--114--disk--0 > pve-vm--127--disk--0 > pve-vm--104--disk--0 pve-vm--115--disk--0 > pve-vm--129--disk--0 > > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza > escreveu: > >> It's quite strange, what about "ls /dev/pve/*"? 
>> >> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: >>> n: Thu Feb 13 07:06:19 2020 >>> a2web:~# lvs >>> LV VG Attr LSize Pool Origin >>> Data% Meta% Move Log Cpy%Sync Convert >>> backup iscsi -wi-ao---- 1.61t >>> >>> data pve twi-aotz-- 3.34t >>> 88.21 9.53 >>> root pve -wi-ao---- 96.00g >>> >>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data >>> vm-104-disk-0 >>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data >>> vm-113-disk-0 >>> swap pve -wi-ao---- 8.00g >>> >>> vm-101-disk-0 pve Vwi-aotz-- 50.00g data >>> 24.17 >>> vm-102-disk-0 pve Vwi-aotz-- 500.00g data >>> 65.65 >>> vm-103-disk-0 pve Vwi-aotz-- 300.00g data >>> 37.28 >>> vm-104-disk-0 pve Vwi-aotz-- 200.00g data >>> 17.87 >>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data >>> 35.53 >>> vm-105-disk-0 pve Vwi-aotz-- 700.00g data >>> 90.18 >>> vm-106-disk-0 pve Vwi-aotz-- 150.00g data >>> 93.55 >>> vm-107-disk-0 pve Vwi-aotz-- 500.00g data >>> 98.20 >>> vm-108-disk-0 pve Vwi-aotz-- 200.00g data >>> 98.02 >>> vm-109-disk-0 pve Vwi-aotz-- 100.00g data >>> 93.68 >>> vm-110-disk-0 pve Vwi-aotz-- 100.00g data >>> 34.55 >>> vm-111-disk-0 pve Vwi-aotz-- 100.00g data >>> 79.03 >>> vm-112-disk-0 pve Vwi-aotz-- 120.00g data >>> 93.78 >>> vm-113-disk-0 pve Vwi-aotz-- 50.00g data >>> 65.42 >>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data >>> 43.64 >>> vm-114-disk-0 pve Vwi-aotz-- 120.00g data >>> 100.00 >>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g data >>> 70.28 >>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g data >>> 0.00 >>> vm-116-disk-0 pve Vwi-aotz-- 100.00g data >>> 26.34 >>> vm-117-disk-0 pve Vwi-aotz-- 100.00g data >>> 100.00 >>> vm-118-disk-0 pve Vwi-aotz-- 100.00g data >>> 100.00 >>> vm-119-disk-0 pve Vwi-aotz-- 25.00g data >>> 18.42 >>> vm-121-disk-0 pve Vwi-aotz-- 100.00g data >>> 23.76 >>> vm-122-disk-0 pve Vwi-aotz-- 100.00g data >>> 100.00 >>> vm-123-disk-0 pve Vwi-aotz-- 150.00g data >>> 37.89 >>> vm-124-disk-0 pve Vwi-aotz-- 100.00g data >>> 30.73 >>> vm-125-disk-0 pve Vwi-aotz-- 50.00g data >>> 9.02 >>> vm-126-disk-0 pve Vwi-aotz-- 30.00g data >>> 99.72 >>> vm-127-disk-0 pve Vwi-aotz-- 50.00g data >>> 10.79 >>> vm-129-disk-0 pve Vwi-aotz-- 20.00g data >>> 45.04 >>> >>> cat /etc/pve/storage.cfg >>> dir: local >>> path /var/lib/vz >>> content backup,iso,vztmpl >>> >>> lvmthin: local-lvm >>> thinpool data >>> vgname pve >>> content rootdir,images >>> >>> iscsi: iscsi >>> portal some-portal >>> target some-target >>> content images >>> >>> lvm: iscsi-lvm >>> vgname iscsi >>> base iscsi:0.0.0.scsi-mpatha >>> content rootdir,images >>> shared 1 >>> >>> dir: backup >>> path /backup >>> content images,rootdir,iso,backup >>> maxfiles 3 >>> shared 0 >>> --- >>> Gilberto Nunes Ferreira >>> >>> (47) 3025-5907 >>> (47) 99676-7530 - Whatsapp / Telegram >>> >>> Skype: gilberto.nunes36 >>> >>> >>> >>> >>> >>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza >>> escreveu: >>> >>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? >>>> >>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: >>>>> HI all >>>>> >>>>> Still in trouble with this issue >>>>> >>>>> cat daemon.log | grep "Feb 12 22:10" >>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication >>>> runner... >>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication >> runner. 
>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >>>> (qemu) >>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - >> no >>>>> such volume 'local-lvm:vm-110-disk-0' >>>>> >>>>> syslog >>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >>>> (qemu) >>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock >> backup >>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - >> no >>>>> such volume 'local-lvm:vm-110-disk-0' >>>>> >>>>> pveversion >>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) >>>>> >>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) >>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) >>>>> pve-kernel-4.15: 5.4-12 >>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 >>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 >>>>> corosync: 2.4.4-pve1 >>>>> criu: 2.11.1-1~bpo90 >>>>> glusterfs-client: 3.8.8-1 >>>>> ksm-control-daemon: 1.2-2 >>>>> libjs-extjs: 6.0.1-2 >>>>> libpve-access-control: 5.1-12 >>>>> libpve-apiclient-perl: 2.0-5 >>>>> libpve-common-perl: 5.0-56 >>>>> libpve-guest-common-perl: 2.0-20 >>>>> libpve-http-server-perl: 2.0-14 >>>>> libpve-storage-perl: 5.0-44 >>>>> libqb0: 1.0.3-1~bpo9 >>>>> lvm2: 2.02.168-pve6 >>>>> lxc-pve: 3.1.0-7 >>>>> lxcfs: 3.0.3-pve1 >>>>> novnc-pve: 1.0.0-3 >>>>> proxmox-widget-toolkit: 1.0-28 >>>>> pve-cluster: 5.0-38 >>>>> pve-container: 2.0-41 >>>>> pve-docs: 5.4-2 >>>>> pve-edk2-firmware: 1.20190312-1 >>>>> pve-firewall: 3.0-22 >>>>> pve-firmware: 2.0-7 >>>>> pve-ha-manager: 2.0-9 >>>>> pve-i18n: 1.1-4 >>>>> pve-libspice-server1: 0.14.1-2 >>>>> pve-qemu-kvm: 3.0.1-4 >>>>> pve-xtermjs: 3.12.0-1 >>>>> qemu-server: 5.0-55 >>>>> smartmontools: 6.5+svn4324-1 >>>>> spiceterm: 3.0-5 >>>>> vncterm: 1.5-3 >>>>> zfsutils-linux: 0.7.13-pve1~bpo2 >>>>> >>>>> >>>>> Some help??? Sould I upgrade the server to 6.x?? >>>>> >>>>> Thanks >>>>> >>>>> --- >>>>> Gilberto Nunes Ferreira >>>>> >>>>> (47) 3025-5907 >>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>> >>>>> Skype: gilberto.nunes36 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < >>>>> gilberto.nunes32 at gmail.com> escreveu: >>>>> >>>>>> Hi there >>>>>> >>>>>> I got a strage error last night. Vzdump complain about the >>>>>> disk no exist or lvm volume in this case but the volume exist, indeed! >>>>>> In the morning I have do a manually backup and it's working fine... >>>>>> Any advice? 
>>>>>> >>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) >>>>>> 112: 2020-01-29 22:20:02 INFO: status = running >>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 >>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' >>>> 'local-lvm:vm-112-disk-0' 120G >>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such >>>> volume 'local-lvm:vm-112-disk-0' >>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) >>>>>> 116: 2020-01-29 22:20:23 INFO: status = running >>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' >>>> 'local-lvm:vm-116-disk-0' 100G >>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such >>>> volume 'local-lvm:vm-116-disk-0' >>>>>> --- >>>>>> Gilberto Nunes Ferreira >>>>>> >>>>>> (47) 3025-5907 >>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>> >>>>>> Skype: gilberto.nunes36 >>>>>> >>>>>> >>>>>> >>>>>> >>>>> _______________________________________________ >>>>> pve-user mailing list >>>>> pve-user at pve.proxmox.com >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> -- >>>> Zuzendari Teknikoa / Director T?cnico >>>> Binovo IT Human Project, S.L. >>>> Telf. 943569206 >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>> www.binovo.es >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From gilberto.nunes32 at gmail.com Thu Feb 13 13:24:24 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 13 Feb 2020 09:24:24 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: I can assure you... the disk is there! 
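Two more checks that look at the same volume from the PVE storage layer, alongside the listing below (assuming the names used in this thread, storage 'local-lvm' on volume group 'pve'):

pvesm path local-lvm:vm-110-disk-0
lvs -o lv_name,lv_attr,lv_active pve/vm-110-disk-0

If pvesm resolves the path and the LV reports as active, the "no such volume" error at backup time would point to a transient lookup or activation hiccup rather than a disk that is really gone.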
pvesm list local-lvm local-lvm:vm-101-disk-0 raw 53687091200 101 local-lvm:vm-102-disk-0 raw 536870912000 102 local-lvm:vm-103-disk-0 raw 322122547200 103 local-lvm:vm-104-disk-0 raw 214748364800 104 local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 local-lvm:vm-105-disk-0 raw 751619276800 105 local-lvm:vm-106-disk-0 raw 161061273600 106 local-lvm:vm-107-disk-0 raw 536870912000 107 local-lvm:vm-108-disk-0 raw 214748364800 108 local-lvm:vm-109-disk-0 raw 107374182400 109 local-lvm:vm-110-disk-0 raw 107374182400 110 local-lvm:vm-111-disk-0 raw 107374182400 111 local-lvm:vm-112-disk-0 raw 128849018880 112 local-lvm:vm-113-disk-0 raw 53687091200 113 local-lvm:vm-113-state-antes_balloon raw 17704157184 113 local-lvm:vm-114-disk-0 raw 128849018880 114 local-lvm:vm-115-disk-0 raw 107374182400 115 local-lvm:vm-115-disk-1 raw 53687091200 115 local-lvm:vm-116-disk-0 raw 107374182400 116 local-lvm:vm-117-disk-0 raw 107374182400 117 local-lvm:vm-118-disk-0 raw 107374182400 118 local-lvm:vm-119-disk-0 raw 26843545600 119 local-lvm:vm-121-disk-0 raw 107374182400 121 local-lvm:vm-122-disk-0 raw 107374182400 122 local-lvm:vm-123-disk-0 raw 161061273600 123 local-lvm:vm-124-disk-0 raw 107374182400 124 local-lvm:vm-125-disk-0 raw 53687091200 125 local-lvm:vm-126-disk-0 raw 32212254720 126 local-lvm:vm-127-disk-0 raw 53687091200 127 local-lvm:vm-129-disk-0 raw 21474836480 129 ls -l /dev/pve/vm-110-disk-0 lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15 --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 ?s 09:19, Eneko Lacunza escreveu: > What about: > > pvesm list local-lvm > ls -l /dev/pve/vm-110-disk-0 > > El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > > Quite strange to say the least > > > > > > ls /dev/pve/* > > /dev/pve/root /dev/pve/vm-109-disk-0 > > /dev/pve/vm-118-disk-0 > > /dev/pve/swap /dev/pve/vm-110-disk-0 > > /dev/pve/vm-119-disk-0 > > /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > > /dev/pve/vm-121-disk-0 > > /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > > /dev/pve/vm-122-disk-0 > > /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > > /dev/pve/vm-123-disk-0 > > /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon > > /dev/pve/vm-124-disk-0 > > /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > > /dev/pve/vm-125-disk-0 > > /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > > /dev/pve/vm-126-disk-0 > > /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > > /dev/pve/vm-127-disk-0 > > /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > > /dev/pve/vm-129-disk-0 > > /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > > > > ls /dev/mapper/ > > control pve-vm--104--state--LUKPLAS > > pve-vm--115--disk--1 > > iscsi-backup pve-vm--105--disk--0 > > pve-vm--116--disk--0 > > mpatha pve-vm--106--disk--0 > > pve-vm--117--disk--0 > > pve-data pve-vm--107--disk--0 > > pve-vm--118--disk--0 > > pve-data_tdata pve-vm--108--disk--0 > > pve-vm--119--disk--0 > > pve-data_tmeta pve-vm--109--disk--0 > > pve-vm--121--disk--0 > > pve-data-tpool pve-vm--110--disk--0 > > pve-vm--122--disk--0 > > pve-root pve-vm--111--disk--0 > > pve-vm--123--disk--0 > > pve-swap pve-vm--112--disk--0 > > pve-vm--124--disk--0 > > pve-vm--101--disk--0 pve-vm--113--disk--0 > > pve-vm--125--disk--0 > > pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > > pve-vm--126--disk--0 > > pve-vm--103--disk--0 pve-vm--114--disk--0 > > pve-vm--127--disk--0 > > pve-vm--104--disk--0 pve-vm--115--disk--0 > > pve-vm--129--disk--0 > > > > > > --- > > 
Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza > > escreveu: > > > >> It's quite strange, what about "ls /dev/pve/*"? > >> > >> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > >>> n: Thu Feb 13 07:06:19 2020 > >>> a2web:~# lvs > >>> LV VG Attr LSize Pool > Origin > >>> Data% Meta% Move Log Cpy%Sync Convert > >>> backup iscsi -wi-ao---- 1.61t > >>> > >>> data pve twi-aotz-- 3.34t > >>> 88.21 9.53 > >>> root pve -wi-ao---- 96.00g > >>> > >>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data > >>> vm-104-disk-0 > >>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data > >>> vm-113-disk-0 > >>> swap pve -wi-ao---- 8.00g > >>> > >>> vm-101-disk-0 pve Vwi-aotz-- 50.00g data > >>> 24.17 > >>> vm-102-disk-0 pve Vwi-aotz-- 500.00g data > >>> 65.65 > >>> vm-103-disk-0 pve Vwi-aotz-- 300.00g data > >>> 37.28 > >>> vm-104-disk-0 pve Vwi-aotz-- 200.00g data > >>> 17.87 > >>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data > >>> 35.53 > >>> vm-105-disk-0 pve Vwi-aotz-- 700.00g data > >>> 90.18 > >>> vm-106-disk-0 pve Vwi-aotz-- 150.00g data > >>> 93.55 > >>> vm-107-disk-0 pve Vwi-aotz-- 500.00g data > >>> 98.20 > >>> vm-108-disk-0 pve Vwi-aotz-- 200.00g data > >>> 98.02 > >>> vm-109-disk-0 pve Vwi-aotz-- 100.00g data > >>> 93.68 > >>> vm-110-disk-0 pve Vwi-aotz-- 100.00g data > >>> 34.55 > >>> vm-111-disk-0 pve Vwi-aotz-- 100.00g data > >>> 79.03 > >>> vm-112-disk-0 pve Vwi-aotz-- 120.00g data > >>> 93.78 > >>> vm-113-disk-0 pve Vwi-aotz-- 50.00g data > >>> 65.42 > >>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data > >>> 43.64 > >>> vm-114-disk-0 pve Vwi-aotz-- 120.00g data > >>> 100.00 > >>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g data > >>> 70.28 > >>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g data > >>> 0.00 > >>> vm-116-disk-0 pve Vwi-aotz-- 100.00g data > >>> 26.34 > >>> vm-117-disk-0 pve Vwi-aotz-- 100.00g data > >>> 100.00 > >>> vm-118-disk-0 pve Vwi-aotz-- 100.00g data > >>> 100.00 > >>> vm-119-disk-0 pve Vwi-aotz-- 25.00g data > >>> 18.42 > >>> vm-121-disk-0 pve Vwi-aotz-- 100.00g data > >>> 23.76 > >>> vm-122-disk-0 pve Vwi-aotz-- 100.00g data > >>> 100.00 > >>> vm-123-disk-0 pve Vwi-aotz-- 150.00g data > >>> 37.89 > >>> vm-124-disk-0 pve Vwi-aotz-- 100.00g data > >>> 30.73 > >>> vm-125-disk-0 pve Vwi-aotz-- 50.00g data > >>> 9.02 > >>> vm-126-disk-0 pve Vwi-aotz-- 30.00g data > >>> 99.72 > >>> vm-127-disk-0 pve Vwi-aotz-- 50.00g data > >>> 10.79 > >>> vm-129-disk-0 pve Vwi-aotz-- 20.00g data > >>> 45.04 > >>> > >>> cat /etc/pve/storage.cfg > >>> dir: local > >>> path /var/lib/vz > >>> content backup,iso,vztmpl > >>> > >>> lvmthin: local-lvm > >>> thinpool data > >>> vgname pve > >>> content rootdir,images > >>> > >>> iscsi: iscsi > >>> portal some-portal > >>> target some-target > >>> content images > >>> > >>> lvm: iscsi-lvm > >>> vgname iscsi > >>> base iscsi:0.0.0.scsi-mpatha > >>> content rootdir,images > >>> shared 1 > >>> > >>> dir: backup > >>> path /backup > >>> content images,rootdir,iso,backup > >>> maxfiles 3 > >>> shared 0 > >>> --- > >>> Gilberto Nunes Ferreira > >>> > >>> (47) 3025-5907 > >>> (47) 99676-7530 - Whatsapp / Telegram > >>> > >>> Skype: gilberto.nunes36 > >>> > >>> > >>> > >>> > >>> > >>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < > elacunza at binovo.es> > >>> escreveu: > >>> > >>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? 
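A side observation on the lvs output quoted above, not necessarily related to the backup error: the thin pool 'data' is at roughly 88% data usage. A thin pool that fills up completely causes far worse problems than a failed backup, so it may be worth watching, for example with (assuming the pool is pve/data):

lvs -o lv_name,data_percent,metadata_percent pve/data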
> >>>> > >>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > >>>>> HI all > >>>>> > >>>>> Still in trouble with this issue > >>>>> > >>>>> cat daemon.log | grep "Feb 12 22:10" > >>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication > >>>> runner... > >>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication > >> runner. > >>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > >>>> (qemu) > >>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - > >> no > >>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>> > >>>>> syslog > >>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 > >>>> (qemu) > >>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock > >> backup > >>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - > >> no > >>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>> > >>>>> pveversion > >>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > >>>>> > >>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > >>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > >>>>> pve-kernel-4.15: 5.4-12 > >>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > >>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > >>>>> corosync: 2.4.4-pve1 > >>>>> criu: 2.11.1-1~bpo90 > >>>>> glusterfs-client: 3.8.8-1 > >>>>> ksm-control-daemon: 1.2-2 > >>>>> libjs-extjs: 6.0.1-2 > >>>>> libpve-access-control: 5.1-12 > >>>>> libpve-apiclient-perl: 2.0-5 > >>>>> libpve-common-perl: 5.0-56 > >>>>> libpve-guest-common-perl: 2.0-20 > >>>>> libpve-http-server-perl: 2.0-14 > >>>>> libpve-storage-perl: 5.0-44 > >>>>> libqb0: 1.0.3-1~bpo9 > >>>>> lvm2: 2.02.168-pve6 > >>>>> lxc-pve: 3.1.0-7 > >>>>> lxcfs: 3.0.3-pve1 > >>>>> novnc-pve: 1.0.0-3 > >>>>> proxmox-widget-toolkit: 1.0-28 > >>>>> pve-cluster: 5.0-38 > >>>>> pve-container: 2.0-41 > >>>>> pve-docs: 5.4-2 > >>>>> pve-edk2-firmware: 1.20190312-1 > >>>>> pve-firewall: 3.0-22 > >>>>> pve-firmware: 2.0-7 > >>>>> pve-ha-manager: 2.0-9 > >>>>> pve-i18n: 1.1-4 > >>>>> pve-libspice-server1: 0.14.1-2 > >>>>> pve-qemu-kvm: 3.0.1-4 > >>>>> pve-xtermjs: 3.12.0-1 > >>>>> qemu-server: 5.0-55 > >>>>> smartmontools: 6.5+svn4324-1 > >>>>> spiceterm: 3.0-5 > >>>>> vncterm: 1.5-3 > >>>>> zfsutils-linux: 0.7.13-pve1~bpo2 > >>>>> > >>>>> > >>>>> Some help??? Sould I upgrade the server to 6.x?? > >>>>> > >>>>> Thanks > >>>>> > >>>>> --- > >>>>> Gilberto Nunes Ferreira > >>>>> > >>>>> (47) 3025-5907 > >>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>> > >>>>> Skype: gilberto.nunes36 > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > >>>>> gilberto.nunes32 at gmail.com> escreveu: > >>>>> > >>>>>> Hi there > >>>>>> > >>>>>> I got a strage error last night. Vzdump complain about the > >>>>>> disk no exist or lvm volume in this case but the volume exist, > indeed! > >>>>>> In the morning I have do a manually backup and it's working fine... > >>>>>> Any advice? 
> >>>>>> > >>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > >>>>>> 112: 2020-01-29 22:20:02 INFO: status = running > >>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > >>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > >>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > >>>> 'local-lvm:vm-112-disk-0' 120G > >>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such > >>>> volume 'local-lvm:vm-112-disk-0' > >>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > >>>>>> 116: 2020-01-29 22:20:23 INFO: status = running > >>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > >>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > >>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > >>>> 'local-lvm:vm-116-disk-0' 100G > >>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such > >>>> volume 'local-lvm:vm-116-disk-0' > >>>>>> --- > >>>>>> Gilberto Nunes Ferreira > >>>>>> > >>>>>> (47) 3025-5907 > >>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>> > >>>>>> Skype: gilberto.nunes36 > >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>> _______________________________________________ > >>>>> pve-user mailing list > >>>>> pve-user at pve.proxmox.com > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> -- > >>>> Zuzendari Teknikoa / Director T?cnico > >>>> Binovo IT Human Project, S.L. > >>>> Telf. 943569206 > >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>> www.binovo.es > >>>> > >>>> _______________________________________________ > >>>> pve-user mailing list > >>>> pve-user at pve.proxmox.com > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Thu Feb 13 13:28:45 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 13 Feb 2020 13:28:45 +0100 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: Message-ID: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas now, sorry!!! ;) El 13/2/20 a las 13:24, Gilberto Nunes escribi?: > I can assure you... the disk is there! 
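For the dm-15 suggestion above, a few commands show the device-mapper side of the volume (assuming the symlink still points at dm-15, as in the listing that follows):

ls -l /dev/dm-15
dmsetup info /dev/dm-15
udevadm info /dev/dm-15

All three should agree that the device exists, maps to pve-vm--110--disk--0 and has sane permissions.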
> > pvesm list local-lvm > local-lvm:vm-101-disk-0 raw 53687091200 101 > local-lvm:vm-102-disk-0 raw 536870912000 102 > local-lvm:vm-103-disk-0 raw 322122547200 103 > local-lvm:vm-104-disk-0 raw 214748364800 104 > local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 > local-lvm:vm-105-disk-0 raw 751619276800 105 > local-lvm:vm-106-disk-0 raw 161061273600 106 > local-lvm:vm-107-disk-0 raw 536870912000 107 > local-lvm:vm-108-disk-0 raw 214748364800 108 > local-lvm:vm-109-disk-0 raw 107374182400 109 > local-lvm:vm-110-disk-0 raw 107374182400 110 > local-lvm:vm-111-disk-0 raw 107374182400 111 > local-lvm:vm-112-disk-0 raw 128849018880 112 > local-lvm:vm-113-disk-0 raw 53687091200 113 > local-lvm:vm-113-state-antes_balloon raw 17704157184 113 > local-lvm:vm-114-disk-0 raw 128849018880 114 > local-lvm:vm-115-disk-0 raw 107374182400 115 > local-lvm:vm-115-disk-1 raw 53687091200 115 > local-lvm:vm-116-disk-0 raw 107374182400 116 > local-lvm:vm-117-disk-0 raw 107374182400 117 > local-lvm:vm-118-disk-0 raw 107374182400 118 > local-lvm:vm-119-disk-0 raw 26843545600 119 > local-lvm:vm-121-disk-0 raw 107374182400 121 > local-lvm:vm-122-disk-0 raw 107374182400 122 > local-lvm:vm-123-disk-0 raw 161061273600 123 > local-lvm:vm-124-disk-0 raw 107374182400 124 > local-lvm:vm-125-disk-0 raw 53687091200 125 > local-lvm:vm-126-disk-0 raw 32212254720 126 > local-lvm:vm-127-disk-0 raw 53687091200 127 > local-lvm:vm-129-disk-0 raw 21474836480 129 > > ls -l /dev/pve/vm-110-disk-0 > lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15 > > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 13 de fev. de 2020 ?s 09:19, Eneko Lacunza > escreveu: > >> What about: >> >> pvesm list local-lvm >> ls -l /dev/pve/vm-110-disk-0 >> >> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: >>> Quite strange to say the least >>> >>> >>> ls /dev/pve/* >>> /dev/pve/root /dev/pve/vm-109-disk-0 >>> /dev/pve/vm-118-disk-0 >>> /dev/pve/swap /dev/pve/vm-110-disk-0 >>> /dev/pve/vm-119-disk-0 >>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 >>> /dev/pve/vm-121-disk-0 >>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 >>> /dev/pve/vm-122-disk-0 >>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 >>> /dev/pve/vm-123-disk-0 >>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon >>> /dev/pve/vm-124-disk-0 >>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 >>> /dev/pve/vm-125-disk-0 >>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 >>> /dev/pve/vm-126-disk-0 >>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 >>> /dev/pve/vm-127-disk-0 >>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 >>> /dev/pve/vm-129-disk-0 >>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 >>> >>> ls /dev/mapper/ >>> control pve-vm--104--state--LUKPLAS >>> pve-vm--115--disk--1 >>> iscsi-backup pve-vm--105--disk--0 >>> pve-vm--116--disk--0 >>> mpatha pve-vm--106--disk--0 >>> pve-vm--117--disk--0 >>> pve-data pve-vm--107--disk--0 >>> pve-vm--118--disk--0 >>> pve-data_tdata pve-vm--108--disk--0 >>> pve-vm--119--disk--0 >>> pve-data_tmeta pve-vm--109--disk--0 >>> pve-vm--121--disk--0 >>> pve-data-tpool pve-vm--110--disk--0 >>> pve-vm--122--disk--0 >>> pve-root pve-vm--111--disk--0 >>> pve-vm--123--disk--0 >>> pve-swap pve-vm--112--disk--0 >>> pve-vm--124--disk--0 >>> pve-vm--101--disk--0 pve-vm--113--disk--0 >>> pve-vm--125--disk--0 >>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon >>> pve-vm--126--disk--0 >>> pve-vm--103--disk--0 pve-vm--114--disk--0 >>> 
pve-vm--127--disk--0 >>> pve-vm--104--disk--0 pve-vm--115--disk--0 >>> pve-vm--129--disk--0 >>> >>> >>> --- >>> Gilberto Nunes Ferreira >>> >>> (47) 3025-5907 >>> (47) 99676-7530 - Whatsapp / Telegram >>> >>> Skype: gilberto.nunes36 >>> >>> >>> >>> >>> >>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza >>> escreveu: >>> >>>> It's quite strange, what about "ls /dev/pve/*"? >>>> >>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: >>>>> n: Thu Feb 13 07:06:19 2020 >>>>> a2web:~# lvs >>>>> LV VG Attr LSize Pool >> Origin >>>>> Data% Meta% Move Log Cpy%Sync Convert >>>>> backup iscsi -wi-ao---- 1.61t >>>>> >>>>> data pve twi-aotz-- 3.34t >>>>> 88.21 9.53 >>>>> root pve -wi-ao---- 96.00g >>>>> >>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data >>>>> vm-104-disk-0 >>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data >>>>> vm-113-disk-0 >>>>> swap pve -wi-ao---- 8.00g >>>>> >>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g data >>>>> 24.17 >>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g data >>>>> 65.65 >>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g data >>>>> 37.28 >>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g data >>>>> 17.87 >>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data >>>>> 35.53 >>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g data >>>>> 90.18 >>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g data >>>>> 93.55 >>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g data >>>>> 98.20 >>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g data >>>>> 98.02 >>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 93.68 >>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 34.55 >>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 79.03 >>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g data >>>>> 93.78 >>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g data >>>>> 65.42 >>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data >>>>> 43.64 >>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g data >>>>> 100.00 >>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g data >>>>> 70.28 >>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g data >>>>> 0.00 >>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 26.34 >>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 100.00 >>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 100.00 >>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g data >>>>> 18.42 >>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 23.76 >>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 100.00 >>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g data >>>>> 37.89 >>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g data >>>>> 30.73 >>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g data >>>>> 9.02 >>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g data >>>>> 99.72 >>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g data >>>>> 10.79 >>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g data >>>>> 45.04 >>>>> >>>>> cat /etc/pve/storage.cfg >>>>> dir: local >>>>> path /var/lib/vz >>>>> content backup,iso,vztmpl >>>>> >>>>> lvmthin: local-lvm >>>>> thinpool data >>>>> vgname pve >>>>> content rootdir,images >>>>> >>>>> iscsi: iscsi >>>>> portal some-portal >>>>> target some-target >>>>> content images >>>>> >>>>> lvm: iscsi-lvm >>>>> vgname iscsi >>>>> base iscsi:0.0.0.scsi-mpatha >>>>> content rootdir,images >>>>> shared 1 >>>>> >>>>> dir: backup >>>>> path /backup >>>>> content images,rootdir,iso,backup >>>>> maxfiles 3 >>>>> shared 0 >>>>> --- >>>>> Gilberto Nunes Ferreira >>>>> >>>>> (47) 3025-5907 >>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>> >>>>> Skype: gilberto.nunes36 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Em qui., 13 de fev. 
de 2020 ?s 08:11, Eneko Lacunza < >> elacunza at binovo.es> >>>>> escreveu: >>>>> >>>>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? >>>>>> >>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: >>>>>>> HI all >>>>>>> >>>>>>> Still in trouble with this issue >>>>>>> >>>>>>> cat daemon.log | grep "Feb 12 22:10" >>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication >>>>>> runner... >>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication >>>> runner. >>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >>>>>> (qemu) >>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - >>>> no >>>>>>> such volume 'local-lvm:vm-110-disk-0' >>>>>>> >>>>>>> syslog >>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 >>>>>> (qemu) >>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock >>>> backup >>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - >>>> no >>>>>>> such volume 'local-lvm:vm-110-disk-0' >>>>>>> >>>>>>> pveversion >>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) >>>>>>> >>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) >>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) >>>>>>> pve-kernel-4.15: 5.4-12 >>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 >>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 >>>>>>> corosync: 2.4.4-pve1 >>>>>>> criu: 2.11.1-1~bpo90 >>>>>>> glusterfs-client: 3.8.8-1 >>>>>>> ksm-control-daemon: 1.2-2 >>>>>>> libjs-extjs: 6.0.1-2 >>>>>>> libpve-access-control: 5.1-12 >>>>>>> libpve-apiclient-perl: 2.0-5 >>>>>>> libpve-common-perl: 5.0-56 >>>>>>> libpve-guest-common-perl: 2.0-20 >>>>>>> libpve-http-server-perl: 2.0-14 >>>>>>> libpve-storage-perl: 5.0-44 >>>>>>> libqb0: 1.0.3-1~bpo9 >>>>>>> lvm2: 2.02.168-pve6 >>>>>>> lxc-pve: 3.1.0-7 >>>>>>> lxcfs: 3.0.3-pve1 >>>>>>> novnc-pve: 1.0.0-3 >>>>>>> proxmox-widget-toolkit: 1.0-28 >>>>>>> pve-cluster: 5.0-38 >>>>>>> pve-container: 2.0-41 >>>>>>> pve-docs: 5.4-2 >>>>>>> pve-edk2-firmware: 1.20190312-1 >>>>>>> pve-firewall: 3.0-22 >>>>>>> pve-firmware: 2.0-7 >>>>>>> pve-ha-manager: 2.0-9 >>>>>>> pve-i18n: 1.1-4 >>>>>>> pve-libspice-server1: 0.14.1-2 >>>>>>> pve-qemu-kvm: 3.0.1-4 >>>>>>> pve-xtermjs: 3.12.0-1 >>>>>>> qemu-server: 5.0-55 >>>>>>> smartmontools: 6.5+svn4324-1 >>>>>>> spiceterm: 3.0-5 >>>>>>> vncterm: 1.5-3 >>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 >>>>>>> >>>>>>> >>>>>>> Some help??? Sould I upgrade the server to 6.x?? >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> --- >>>>>>> Gilberto Nunes Ferreira >>>>>>> >>>>>>> (47) 3025-5907 >>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>>> >>>>>>> Skype: gilberto.nunes36 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < >>>>>>> gilberto.nunes32 at gmail.com> escreveu: >>>>>>> >>>>>>>> Hi there >>>>>>>> >>>>>>>> I got a strage error last night. Vzdump complain about the >>>>>>>> disk no exist or lvm volume in this case but the volume exist, >> indeed! >>>>>>>> In the morning I have do a manually backup and it's working fine... >>>>>>>> Any advice? 
>>>>>>>> >>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) >>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running >>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 >>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' >>>>>> 'local-lvm:vm-112-disk-0' 120G >>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such >>>>>> volume 'local-lvm:vm-112-disk-0' >>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) >>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running >>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' >>>>>> 'local-lvm:vm-116-disk-0' 100G >>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such >>>>>> volume 'local-lvm:vm-116-disk-0' >>>>>>>> --- >>>>>>>> Gilberto Nunes Ferreira >>>>>>>> >>>>>>>> (47) 3025-5907 >>>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>>>> >>>>>>>> Skype: gilberto.nunes36 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> pve-user mailing list >>>>>>> pve-user at pve.proxmox.com >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>> -- >>>>>> Zuzendari Teknikoa / Director T?cnico >>>>>> Binovo IT Human Project, S.L. >>>>>> Telf. 943569206 >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>>>> www.binovo.es >>>>>> >>>>>> _______________________________________________ >>>>>> pve-user mailing list >>>>>> pve-user at pve.proxmox.com >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>> >>>>> _______________________________________________ >>>>> pve-user mailing list >>>>> pve-user at pve.proxmox.com >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> -- >>>> Zuzendari Teknikoa / Director T?cnico >>>> Binovo IT Human Project, S.L. >>>> Telf. 943569206 >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>> www.binovo.es >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From gilberto.nunes32 at gmail.com Thu Feb 13 13:42:35 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 13 Feb 2020 09:42:35 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! 
In-Reply-To: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> Message-ID: Yeah! Me too... This problem is pretty random... Let see next week! --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 ?s 09:29, Eneko Lacunza escreveu: > > Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas > now, sorry!!! ;) > > El 13/2/20 a las 13:24, Gilberto Nunes escribi?: > > I can assure you... the disk is there! > > > > pvesm list local-lvm > > local-lvm:vm-101-disk-0 raw 53687091200 101 > > local-lvm:vm-102-disk-0 raw 536870912000 102 > > local-lvm:vm-103-disk-0 raw 322122547200 103 > > local-lvm:vm-104-disk-0 raw 214748364800 104 > > local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 > > local-lvm:vm-105-disk-0 raw 751619276800 105 > > local-lvm:vm-106-disk-0 raw 161061273600 106 > > local-lvm:vm-107-disk-0 raw 536870912000 107 > > local-lvm:vm-108-disk-0 raw 214748364800 108 > > local-lvm:vm-109-disk-0 raw 107374182400 109 > > local-lvm:vm-110-disk-0 raw 107374182400 110 > > local-lvm:vm-111-disk-0 raw 107374182400 111 > > local-lvm:vm-112-disk-0 raw 128849018880 112 > > local-lvm:vm-113-disk-0 raw 53687091200 113 > > local-lvm:vm-113-state-antes_balloon raw 17704157184 113 > > local-lvm:vm-114-disk-0 raw 128849018880 114 > > local-lvm:vm-115-disk-0 raw 107374182400 115 > > local-lvm:vm-115-disk-1 raw 53687091200 115 > > local-lvm:vm-116-disk-0 raw 107374182400 116 > > local-lvm:vm-117-disk-0 raw 107374182400 117 > > local-lvm:vm-118-disk-0 raw 107374182400 118 > > local-lvm:vm-119-disk-0 raw 26843545600 119 > > local-lvm:vm-121-disk-0 raw 107374182400 121 > > local-lvm:vm-122-disk-0 raw 107374182400 122 > > local-lvm:vm-123-disk-0 raw 161061273600 123 > > local-lvm:vm-124-disk-0 raw 107374182400 124 > > local-lvm:vm-125-disk-0 raw 53687091200 125 > > local-lvm:vm-126-disk-0 raw 32212254720 126 > > local-lvm:vm-127-disk-0 raw 53687091200 127 > > local-lvm:vm-129-disk-0 raw 21474836480 129 > > > > ls -l /dev/pve/vm-110-disk-0 > > lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15 > > > > > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 13 de fev. 
de 2020 ?s 09:19, Eneko Lacunza > > escreveu: > > > >> What about: > >> > >> pvesm list local-lvm > >> ls -l /dev/pve/vm-110-disk-0 > >> > >> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > >>> Quite strange to say the least > >>> > >>> > >>> ls /dev/pve/* > >>> /dev/pve/root /dev/pve/vm-109-disk-0 > >>> /dev/pve/vm-118-disk-0 > >>> /dev/pve/swap /dev/pve/vm-110-disk-0 > >>> /dev/pve/vm-119-disk-0 > >>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > >>> /dev/pve/vm-121-disk-0 > >>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > >>> /dev/pve/vm-122-disk-0 > >>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > >>> /dev/pve/vm-123-disk-0 > >>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon > >>> /dev/pve/vm-124-disk-0 > >>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > >>> /dev/pve/vm-125-disk-0 > >>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > >>> /dev/pve/vm-126-disk-0 > >>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > >>> /dev/pve/vm-127-disk-0 > >>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > >>> /dev/pve/vm-129-disk-0 > >>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > >>> > >>> ls /dev/mapper/ > >>> control pve-vm--104--state--LUKPLAS > >>> pve-vm--115--disk--1 > >>> iscsi-backup pve-vm--105--disk--0 > >>> pve-vm--116--disk--0 > >>> mpatha pve-vm--106--disk--0 > >>> pve-vm--117--disk--0 > >>> pve-data pve-vm--107--disk--0 > >>> pve-vm--118--disk--0 > >>> pve-data_tdata pve-vm--108--disk--0 > >>> pve-vm--119--disk--0 > >>> pve-data_tmeta pve-vm--109--disk--0 > >>> pve-vm--121--disk--0 > >>> pve-data-tpool pve-vm--110--disk--0 > >>> pve-vm--122--disk--0 > >>> pve-root pve-vm--111--disk--0 > >>> pve-vm--123--disk--0 > >>> pve-swap pve-vm--112--disk--0 > >>> pve-vm--124--disk--0 > >>> pve-vm--101--disk--0 pve-vm--113--disk--0 > >>> pve-vm--125--disk--0 > >>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > >>> pve-vm--126--disk--0 > >>> pve-vm--103--disk--0 pve-vm--114--disk--0 > >>> pve-vm--127--disk--0 > >>> pve-vm--104--disk--0 pve-vm--115--disk--0 > >>> pve-vm--129--disk--0 > >>> > >>> > >>> --- > >>> Gilberto Nunes Ferreira > >>> > >>> (47) 3025-5907 > >>> (47) 99676-7530 - Whatsapp / Telegram > >>> > >>> Skype: gilberto.nunes36 > >>> > >>> > >>> > >>> > >>> > >>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < > elacunza at binovo.es> > >>> escreveu: > >>> > >>>> It's quite strange, what about "ls /dev/pve/*"? 
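Should the /dev/pve/* symlinks ever turn out to be missing or stale (they are created by udev/LVM, not by Proxmox itself), they can usually be recreated without a reboot; a sketch assuming the volume group is called 'pve':

vgmknodes pve
udevadm settle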
> >>>> > >>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > >>>>> n: Thu Feb 13 07:06:19 2020 > >>>>> a2web:~# lvs > >>>>> LV VG Attr LSize Pool > >> Origin > >>>>> Data% Meta% Move Log Cpy%Sync Convert > >>>>> backup iscsi -wi-ao---- 1.61t > >>>>> > >>>>> data pve twi-aotz-- 3.34t > >>>>> 88.21 9.53 > >>>>> root pve -wi-ao---- 96.00g > >>>>> > >>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data > >>>>> vm-104-disk-0 > >>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data > >>>>> vm-113-disk-0 > >>>>> swap pve -wi-ao---- 8.00g > >>>>> > >>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g data > >>>>> 24.17 > >>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g data > >>>>> 65.65 > >>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g data > >>>>> 37.28 > >>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g data > >>>>> 17.87 > >>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data > >>>>> 35.53 > >>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g data > >>>>> 90.18 > >>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g data > >>>>> 93.55 > >>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g data > >>>>> 98.20 > >>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g data > >>>>> 98.02 > >>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 93.68 > >>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 34.55 > >>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 79.03 > >>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g data > >>>>> 93.78 > >>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g data > >>>>> 65.42 > >>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data > >>>>> 43.64 > >>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g data > >>>>> 100.00 > >>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g data > >>>>> 70.28 > >>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g data > >>>>> 0.00 > >>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 26.34 > >>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 100.00 > >>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 100.00 > >>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g data > >>>>> 18.42 > >>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 23.76 > >>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 100.00 > >>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g data > >>>>> 37.89 > >>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g data > >>>>> 30.73 > >>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g data > >>>>> 9.02 > >>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g data > >>>>> 99.72 > >>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g data > >>>>> 10.79 > >>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g data > >>>>> 45.04 > >>>>> > >>>>> cat /etc/pve/storage.cfg > >>>>> dir: local > >>>>> path /var/lib/vz > >>>>> content backup,iso,vztmpl > >>>>> > >>>>> lvmthin: local-lvm > >>>>> thinpool data > >>>>> vgname pve > >>>>> content rootdir,images > >>>>> > >>>>> iscsi: iscsi > >>>>> portal some-portal > >>>>> target some-target > >>>>> content images > >>>>> > >>>>> lvm: iscsi-lvm > >>>>> vgname iscsi > >>>>> base iscsi:0.0.0.scsi-mpatha > >>>>> content rootdir,images > >>>>> shared 1 > >>>>> > >>>>> dir: backup > >>>>> path /backup > >>>>> content images,rootdir,iso,backup > >>>>> maxfiles 3 > >>>>> shared 0 > >>>>> --- > >>>>> Gilberto Nunes Ferreira > >>>>> > >>>>> (47) 3025-5907 > >>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>> > >>>>> Skype: gilberto.nunes36 > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < > >> elacunza at binovo.es> > >>>>> escreveu: > >>>>> > >>>>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? 
> >>>>>> > >>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > >>>>>>> HI all > >>>>>>> > >>>>>>> Still in trouble with this issue > >>>>>>> > >>>>>>> cat daemon.log | grep "Feb 12 22:10" > >>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication > >>>>>> runner... > >>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication > >>>> runner. > >>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM > 110 > >>>>>> (qemu) > >>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > failed - > >>>> no > >>>>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>>>> > >>>>>>> syslog > >>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM > 110 > >>>>>> (qemu) > >>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock > >>>> backup > >>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > failed - > >>>> no > >>>>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>>>> > >>>>>>> pveversion > >>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > >>>>>>> > >>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > >>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > >>>>>>> pve-kernel-4.15: 5.4-12 > >>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > >>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > >>>>>>> corosync: 2.4.4-pve1 > >>>>>>> criu: 2.11.1-1~bpo90 > >>>>>>> glusterfs-client: 3.8.8-1 > >>>>>>> ksm-control-daemon: 1.2-2 > >>>>>>> libjs-extjs: 6.0.1-2 > >>>>>>> libpve-access-control: 5.1-12 > >>>>>>> libpve-apiclient-perl: 2.0-5 > >>>>>>> libpve-common-perl: 5.0-56 > >>>>>>> libpve-guest-common-perl: 2.0-20 > >>>>>>> libpve-http-server-perl: 2.0-14 > >>>>>>> libpve-storage-perl: 5.0-44 > >>>>>>> libqb0: 1.0.3-1~bpo9 > >>>>>>> lvm2: 2.02.168-pve6 > >>>>>>> lxc-pve: 3.1.0-7 > >>>>>>> lxcfs: 3.0.3-pve1 > >>>>>>> novnc-pve: 1.0.0-3 > >>>>>>> proxmox-widget-toolkit: 1.0-28 > >>>>>>> pve-cluster: 5.0-38 > >>>>>>> pve-container: 2.0-41 > >>>>>>> pve-docs: 5.4-2 > >>>>>>> pve-edk2-firmware: 1.20190312-1 > >>>>>>> pve-firewall: 3.0-22 > >>>>>>> pve-firmware: 2.0-7 > >>>>>>> pve-ha-manager: 2.0-9 > >>>>>>> pve-i18n: 1.1-4 > >>>>>>> pve-libspice-server1: 0.14.1-2 > >>>>>>> pve-qemu-kvm: 3.0.1-4 > >>>>>>> pve-xtermjs: 3.12.0-1 > >>>>>>> qemu-server: 5.0-55 > >>>>>>> smartmontools: 6.5+svn4324-1 > >>>>>>> spiceterm: 3.0-5 > >>>>>>> vncterm: 1.5-3 > >>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 > >>>>>>> > >>>>>>> > >>>>>>> Some help??? Sould I upgrade the server to 6.x?? > >>>>>>> > >>>>>>> Thanks > >>>>>>> > >>>>>>> --- > >>>>>>> Gilberto Nunes Ferreira > >>>>>>> > >>>>>>> (47) 3025-5907 > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>>> > >>>>>>> Skype: gilberto.nunes36 > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > >>>>>>> gilberto.nunes32 at gmail.com> escreveu: > >>>>>>> > >>>>>>>> Hi there > >>>>>>>> > >>>>>>>> I got a strage error last night. Vzdump complain about the > >>>>>>>> disk no exist or lvm volume in this case but the volume exist, > >> indeed! > >>>>>>>> In the morning I have do a manually backup and it's working > fine... > >>>>>>>> Any advice? 
> >>>>>>>> > >>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > >>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running > >>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > >>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > >>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > >>>>>> 'local-lvm:vm-112-disk-0' 120G > >>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such > >>>>>> volume 'local-lvm:vm-112-disk-0' > >>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > >>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running > >>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > >>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > >>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > >>>>>> 'local-lvm:vm-116-disk-0' 100G > >>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such > >>>>>> volume 'local-lvm:vm-116-disk-0' > >>>>>>>> --- > >>>>>>>> Gilberto Nunes Ferreira > >>>>>>>> > >>>>>>>> (47) 3025-5907 > >>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>>>> > >>>>>>>> Skype: gilberto.nunes36 > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> pve-user mailing list > >>>>>>> pve-user at pve.proxmox.com > >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>> -- > >>>>>> Zuzendari Teknikoa / Director T?cnico > >>>>>> Binovo IT Human Project, S.L. > >>>>>> Telf. 943569206 > >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>>>> www.binovo.es > >>>>>> > >>>>>> _______________________________________________ > >>>>>> pve-user mailing list > >>>>>> pve-user at pve.proxmox.com > >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>> > >>>>> _______________________________________________ > >>>>> pve-user mailing list > >>>>> pve-user at pve.proxmox.com > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> -- > >>>> Zuzendari Teknikoa / Director T?cnico > >>>> Binovo IT Human Project, S.L. > >>>> Telf. 943569206 > >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>> www.binovo.es > >>>> > >>>> _______________________________________________ > >>>> pve-user mailing list > >>>> pve-user at pve.proxmox.com > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From atilav at lightspeed.ca Thu Feb 13 23:53:35 2020 From: atilav at lightspeed.ca (Atila Vasconcelos) Date: Thu, 13 Feb 2020 14:53:35 -0800 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> Message-ID: <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Hi, I had the same problem in the past and it repeats once a while.... its very random; I could not find any way to reproduce it. But as it happens... it will go away. When you are almost forgetting about it, it will come again ;) I just learned to ignore it (and do manually the backup when it fails) I see in proxmox 6.x it is less frequent (but still happening once a while). ABV On 2020-02-13 4:42 a.m., Gilberto Nunes wrote: > Yeah! Me too... This problem is pretty random... Let see next week! > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 13 de fev. de 2020 ?s 09:29, Eneko Lacunza > escreveu: > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas >> now, sorry!!! ;) >> >> El 13/2/20 a las 13:24, Gilberto Nunes escribi?: >>> I can assure you... the disk is there! >>> >>> pvesm list local-lvm >>> local-lvm:vm-101-disk-0 raw 53687091200 101 >>> local-lvm:vm-102-disk-0 raw 536870912000 102 >>> local-lvm:vm-103-disk-0 raw 322122547200 103 >>> local-lvm:vm-104-disk-0 raw 214748364800 104 >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 >>> local-lvm:vm-105-disk-0 raw 751619276800 105 >>> local-lvm:vm-106-disk-0 raw 161061273600 106 >>> local-lvm:vm-107-disk-0 raw 536870912000 107 >>> local-lvm:vm-108-disk-0 raw 214748364800 108 >>> local-lvm:vm-109-disk-0 raw 107374182400 109 >>> local-lvm:vm-110-disk-0 raw 107374182400 110 >>> local-lvm:vm-111-disk-0 raw 107374182400 111 >>> local-lvm:vm-112-disk-0 raw 128849018880 112 >>> local-lvm:vm-113-disk-0 raw 53687091200 113 >>> local-lvm:vm-113-state-antes_balloon raw 17704157184 113 >>> local-lvm:vm-114-disk-0 raw 128849018880 114 >>> local-lvm:vm-115-disk-0 raw 107374182400 115 >>> local-lvm:vm-115-disk-1 raw 53687091200 115 >>> local-lvm:vm-116-disk-0 raw 107374182400 116 >>> local-lvm:vm-117-disk-0 raw 107374182400 117 >>> local-lvm:vm-118-disk-0 raw 107374182400 118 >>> local-lvm:vm-119-disk-0 raw 26843545600 119 >>> local-lvm:vm-121-disk-0 raw 107374182400 121 >>> local-lvm:vm-122-disk-0 raw 107374182400 122 >>> local-lvm:vm-123-disk-0 raw 161061273600 123 >>> local-lvm:vm-124-disk-0 raw 107374182400 124 >>> local-lvm:vm-125-disk-0 raw 53687091200 125 >>> local-lvm:vm-126-disk-0 raw 32212254720 126 >>> local-lvm:vm-127-disk-0 raw 53687091200 127 >>> local-lvm:vm-129-disk-0 raw 21474836480 129 >>> >>> ls -l /dev/pve/vm-110-disk-0 >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15 >>> >>> >>> --- >>> Gilberto Nunes Ferreira >>> >>> (47) 3025-5907 >>> (47) 99676-7530 - Whatsapp / Telegram >>> >>> Skype: gilberto.nunes36 >>> >>> >>> >>> >>> >>> Em qui., 13 de fev. 
de 2020 ?s 09:19, Eneko Lacunza >>> escreveu: >>> >>>> What about: >>>> >>>> pvesm list local-lvm >>>> ls -l /dev/pve/vm-110-disk-0 >>>> >>>> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: >>>>> Quite strange to say the least >>>>> >>>>> >>>>> ls /dev/pve/* >>>>> /dev/pve/root /dev/pve/vm-109-disk-0 >>>>> /dev/pve/vm-118-disk-0 >>>>> /dev/pve/swap /dev/pve/vm-110-disk-0 >>>>> /dev/pve/vm-119-disk-0 >>>>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 >>>>> /dev/pve/vm-121-disk-0 >>>>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 >>>>> /dev/pve/vm-122-disk-0 >>>>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 >>>>> /dev/pve/vm-123-disk-0 >>>>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon >>>>> /dev/pve/vm-124-disk-0 >>>>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 >>>>> /dev/pve/vm-125-disk-0 >>>>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 >>>>> /dev/pve/vm-126-disk-0 >>>>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 >>>>> /dev/pve/vm-127-disk-0 >>>>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 >>>>> /dev/pve/vm-129-disk-0 >>>>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 >>>>> >>>>> ls /dev/mapper/ >>>>> control pve-vm--104--state--LUKPLAS >>>>> pve-vm--115--disk--1 >>>>> iscsi-backup pve-vm--105--disk--0 >>>>> pve-vm--116--disk--0 >>>>> mpatha pve-vm--106--disk--0 >>>>> pve-vm--117--disk--0 >>>>> pve-data pve-vm--107--disk--0 >>>>> pve-vm--118--disk--0 >>>>> pve-data_tdata pve-vm--108--disk--0 >>>>> pve-vm--119--disk--0 >>>>> pve-data_tmeta pve-vm--109--disk--0 >>>>> pve-vm--121--disk--0 >>>>> pve-data-tpool pve-vm--110--disk--0 >>>>> pve-vm--122--disk--0 >>>>> pve-root pve-vm--111--disk--0 >>>>> pve-vm--123--disk--0 >>>>> pve-swap pve-vm--112--disk--0 >>>>> pve-vm--124--disk--0 >>>>> pve-vm--101--disk--0 pve-vm--113--disk--0 >>>>> pve-vm--125--disk--0 >>>>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon >>>>> pve-vm--126--disk--0 >>>>> pve-vm--103--disk--0 pve-vm--114--disk--0 >>>>> pve-vm--127--disk--0 >>>>> pve-vm--104--disk--0 pve-vm--115--disk--0 >>>>> pve-vm--129--disk--0 >>>>> >>>>> >>>>> --- >>>>> Gilberto Nunes Ferreira >>>>> >>>>> (47) 3025-5907 >>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>> >>>>> Skype: gilberto.nunes36 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < >> elacunza at binovo.es> >>>>> escreveu: >>>>> >>>>>> It's quite strange, what about "ls /dev/pve/*"? 
>>>>>> >>>>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: >>>>>>> n: Thu Feb 13 07:06:19 2020 >>>>>>> a2web:~# lvs >>>>>>> LV VG Attr LSize Pool >>>> Origin >>>>>>> Data% Meta% Move Log Cpy%Sync Convert >>>>>>> backup iscsi -wi-ao---- 1.61t >>>>>>> >>>>>>> data pve twi-aotz-- 3.34t >>>>>>> 88.21 9.53 >>>>>>> root pve -wi-ao---- 96.00g >>>>>>> >>>>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g data >>>>>>> vm-104-disk-0 >>>>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g data >>>>>>> vm-113-disk-0 >>>>>>> swap pve -wi-ao---- 8.00g >>>>>>> >>>>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g data >>>>>>> 24.17 >>>>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g data >>>>>>> 65.65 >>>>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g data >>>>>>> 37.28 >>>>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g data >>>>>>> 17.87 >>>>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g data >>>>>>> 35.53 >>>>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g data >>>>>>> 90.18 >>>>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g data >>>>>>> 93.55 >>>>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g data >>>>>>> 98.20 >>>>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g data >>>>>>> 98.02 >>>>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 93.68 >>>>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 34.55 >>>>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 79.03 >>>>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g data >>>>>>> 93.78 >>>>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g data >>>>>>> 65.42 >>>>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g data >>>>>>> 43.64 >>>>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g data >>>>>>> 100.00 >>>>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g data >>>>>>> 70.28 >>>>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g data >>>>>>> 0.00 >>>>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 26.34 >>>>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 100.00 >>>>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 100.00 >>>>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g data >>>>>>> 18.42 >>>>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 23.76 >>>>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 100.00 >>>>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g data >>>>>>> 37.89 >>>>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g data >>>>>>> 30.73 >>>>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g data >>>>>>> 9.02 >>>>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g data >>>>>>> 99.72 >>>>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g data >>>>>>> 10.79 >>>>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g data >>>>>>> 45.04 >>>>>>> >>>>>>> cat /etc/pve/storage.cfg >>>>>>> dir: local >>>>>>> path /var/lib/vz >>>>>>> content backup,iso,vztmpl >>>>>>> >>>>>>> lvmthin: local-lvm >>>>>>> thinpool data >>>>>>> vgname pve >>>>>>> content rootdir,images >>>>>>> >>>>>>> iscsi: iscsi >>>>>>> portal some-portal >>>>>>> target some-target >>>>>>> content images >>>>>>> >>>>>>> lvm: iscsi-lvm >>>>>>> vgname iscsi >>>>>>> base iscsi:0.0.0.scsi-mpatha >>>>>>> content rootdir,images >>>>>>> shared 1 >>>>>>> >>>>>>> dir: backup >>>>>>> path /backup >>>>>>> content images,rootdir,iso,backup >>>>>>> maxfiles 3 >>>>>>> shared 0 >>>>>>> --- >>>>>>> Gilberto Nunes Ferreira >>>>>>> >>>>>>> (47) 3025-5907 >>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>>> >>>>>>> Skype: gilberto.nunes36 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < >>>> elacunza at binovo.es> >>>>>>> escreveu: >>>>>>> >>>>>>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? 
>>>>>>>> >>>>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: >>>>>>>>> HI all >>>>>>>>> >>>>>>>>> Still in trouble with this issue >>>>>>>>> >>>>>>>>> cat daemon.log | grep "Feb 12 22:10" >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication >>>>>>>> runner... >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication >>>>>> runner. >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM >> 110 >>>>>>>> (qemu) >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 >> failed - >>>>>> no >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' >>>>>>>>> >>>>>>>>> syslog >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM >> 110 >>>>>>>> (qemu) >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock >>>>>> backup >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 >> failed - >>>>>> no >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' >>>>>>>>> >>>>>>>>> pveversion >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) >>>>>>>>> >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) >>>>>>>>> pve-kernel-4.15: 5.4-12 >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 >>>>>>>>> corosync: 2.4.4-pve1 >>>>>>>>> criu: 2.11.1-1~bpo90 >>>>>>>>> glusterfs-client: 3.8.8-1 >>>>>>>>> ksm-control-daemon: 1.2-2 >>>>>>>>> libjs-extjs: 6.0.1-2 >>>>>>>>> libpve-access-control: 5.1-12 >>>>>>>>> libpve-apiclient-perl: 2.0-5 >>>>>>>>> libpve-common-perl: 5.0-56 >>>>>>>>> libpve-guest-common-perl: 2.0-20 >>>>>>>>> libpve-http-server-perl: 2.0-14 >>>>>>>>> libpve-storage-perl: 5.0-44 >>>>>>>>> libqb0: 1.0.3-1~bpo9 >>>>>>>>> lvm2: 2.02.168-pve6 >>>>>>>>> lxc-pve: 3.1.0-7 >>>>>>>>> lxcfs: 3.0.3-pve1 >>>>>>>>> novnc-pve: 1.0.0-3 >>>>>>>>> proxmox-widget-toolkit: 1.0-28 >>>>>>>>> pve-cluster: 5.0-38 >>>>>>>>> pve-container: 2.0-41 >>>>>>>>> pve-docs: 5.4-2 >>>>>>>>> pve-edk2-firmware: 1.20190312-1 >>>>>>>>> pve-firewall: 3.0-22 >>>>>>>>> pve-firmware: 2.0-7 >>>>>>>>> pve-ha-manager: 2.0-9 >>>>>>>>> pve-i18n: 1.1-4 >>>>>>>>> pve-libspice-server1: 0.14.1-2 >>>>>>>>> pve-qemu-kvm: 3.0.1-4 >>>>>>>>> pve-xtermjs: 3.12.0-1 >>>>>>>>> qemu-server: 5.0-55 >>>>>>>>> smartmontools: 6.5+svn4324-1 >>>>>>>>> spiceterm: 3.0-5 >>>>>>>>> vncterm: 1.5-3 >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 >>>>>>>>> >>>>>>>>> >>>>>>>>> Some help??? Sould I upgrade the server to 6.x?? >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> >>>>>>>>> --- >>>>>>>>> Gilberto Nunes Ferreira >>>>>>>>> >>>>>>>>> (47) 3025-5907 >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>>>>> >>>>>>>>> Skype: gilberto.nunes36 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < >>>>>>>>> gilberto.nunes32 at gmail.com> escreveu: >>>>>>>>> >>>>>>>>>> Hi there >>>>>>>>>> >>>>>>>>>> I got a strage error last night. Vzdump complain about the >>>>>>>>>> disk no exist or lvm volume in this case but the volume exist, >>>> indeed! >>>>>>>>>> In the morning I have do a manually backup and it's working >> fine... >>>>>>>>>> Any advice? 
>>>>>>>>>> >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' >>>>>>>> 'local-lvm:vm-112-disk-0' 120G >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such >>>>>>>> volume 'local-lvm:vm-112-disk-0' >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' >>>>>>>> 'local-lvm:vm-116-disk-0' 100G >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such >>>>>>>> volume 'local-lvm:vm-116-disk-0' >>>>>>>>>> --- >>>>>>>>>> Gilberto Nunes Ferreira >>>>>>>>>> >>>>>>>>>> (47) 3025-5907 >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>>>>>>>>> >>>>>>>>>> Skype: gilberto.nunes36 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> pve-user mailing list >>>>>>>>> pve-user at pve.proxmox.com >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>>>> -- >>>>>>>> Zuzendari Teknikoa / Director T?cnico >>>>>>>> Binovo IT Human Project, S.L. >>>>>>>> Telf. 943569206 >>>>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>>>>>> www.binovo.es >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> pve-user mailing list >>>>>>>> pve-user at pve.proxmox.com >>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> pve-user mailing list >>>>>>> pve-user at pve.proxmox.com >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>> -- >>>>>> Zuzendari Teknikoa / Director T?cnico >>>>>> Binovo IT Human Project, S.L. >>>>>> Telf. 943569206 >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>>>> www.binovo.es >>>>>> >>>>>> _______________________________________________ >>>>>> pve-user mailing list >>>>>> pve-user at pve.proxmox.com >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>> >>>>> _______________________________________________ >>>>> pve-user mailing list >>>>> pve-user at pve.proxmox.com >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> -- >>>> Zuzendari Teknikoa / Director T?cnico >>>> Binovo IT Human Project, S.L. >>>> Telf. 943569206 >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>> www.binovo.es >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gaio at sv.lnf.it Fri Feb 14 10:04:09 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Fri, 14 Feb 2020 10:04:09 +0100 Subject: [PVE-User] Network interfaces renaming strangeness... In-Reply-To: <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> References: <20200213104747.GL3460@sv.lnf.it> <333377406.3248453.1581591329907.JavaMail.zimbra@odiso.com> <20200213113249.GN3460@sv.lnf.it> <2057311173.3250195.1581594880028.JavaMail.zimbra@odiso.com> Message-ID: <20200214090409.GI3590@sv.lnf.it> Mandi! Alexandre DERUMIER In chel di` si favelave... > can you try to edit: > /usr/lib/systemd/network/99-default.link Mmmh.. it is not good practice to edit systemd config files directly; better to create an override in /etc/systemd... Anyway, looking at the systemd bug report you posted later, I've done: root at ino:~# cat /etc/systemd/network/10-e1000e-quirks.link [Match] Driver=e1000e [Link] NamePolicy=path rebuilt the initrd, rebooted, and now my network card is called 'enp16s0' (yes, in the meantime I've moved the card to another PCIe slot ;). I don't know if this is worth a note in the PVE wiki; anyway, thanks to all. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From gilberto.nunes32 at gmail.com Fri Feb 14 15:33:50 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Fri, 14 Feb 2020 11:33:50 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Message-ID: Hi guys, Same problem, but with two different VMs... I also updated Proxmox, still in the 5.x series, but no changes... Now this problem has occurred twice, one night after the other... I am very concerned about it! Please, Proxmox staff, is there something I can do to solve this issue? Has anybody already filed a bugzilla report? Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 às 19:53, Atila Vasconcelos < atilav at lightspeed.ca> escreveu: > Hi, > > I had the same problem in the past and it repeats once in a while.... it's > very random; I could not find any way to reproduce it. > > But as it happens... it will go away. > > When you are almost forgetting about it, it will come again ;) > > I just learned to ignore it (and do the backup manually when it fails) > > I see in Proxmox 6.x it is less frequent (but still happening once in a > while). > > > ABV > > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote: > > Yeah! Me too... This problem is pretty random... Let's see next week! > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 13 de fev.
de 2020 ?s 09:29, Eneko Lacunza > > escreveu: > > > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of ideas > >> now, sorry!!! ;) > >> > >> El 13/2/20 a las 13:24, Gilberto Nunes escribi?: > >>> I can assure you... the disk is there! > >>> > >>> pvesm list local-lvm > >>> local-lvm:vm-101-disk-0 raw 53687091200 101 > >>> local-lvm:vm-102-disk-0 raw 536870912000 102 > >>> local-lvm:vm-103-disk-0 raw 322122547200 103 > >>> local-lvm:vm-104-disk-0 raw 214748364800 104 > >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 > >>> local-lvm:vm-105-disk-0 raw 751619276800 105 > >>> local-lvm:vm-106-disk-0 raw 161061273600 106 > >>> local-lvm:vm-107-disk-0 raw 536870912000 107 > >>> local-lvm:vm-108-disk-0 raw 214748364800 108 > >>> local-lvm:vm-109-disk-0 raw 107374182400 109 > >>> local-lvm:vm-110-disk-0 raw 107374182400 110 > >>> local-lvm:vm-111-disk-0 raw 107374182400 111 > >>> local-lvm:vm-112-disk-0 raw 128849018880 112 > >>> local-lvm:vm-113-disk-0 raw 53687091200 113 > >>> local-lvm:vm-113-state-antes_balloon raw 17704157184 113 > >>> local-lvm:vm-114-disk-0 raw 128849018880 114 > >>> local-lvm:vm-115-disk-0 raw 107374182400 115 > >>> local-lvm:vm-115-disk-1 raw 53687091200 115 > >>> local-lvm:vm-116-disk-0 raw 107374182400 116 > >>> local-lvm:vm-117-disk-0 raw 107374182400 117 > >>> local-lvm:vm-118-disk-0 raw 107374182400 118 > >>> local-lvm:vm-119-disk-0 raw 26843545600 119 > >>> local-lvm:vm-121-disk-0 raw 107374182400 121 > >>> local-lvm:vm-122-disk-0 raw 107374182400 122 > >>> local-lvm:vm-123-disk-0 raw 161061273600 123 > >>> local-lvm:vm-124-disk-0 raw 107374182400 124 > >>> local-lvm:vm-125-disk-0 raw 53687091200 125 > >>> local-lvm:vm-126-disk-0 raw 32212254720 126 > >>> local-lvm:vm-127-disk-0 raw 53687091200 127 > >>> local-lvm:vm-129-disk-0 raw 21474836480 129 > >>> > >>> ls -l /dev/pve/vm-110-disk-0 > >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> > ../dm-15 > >>> > >>> > >>> --- > >>> Gilberto Nunes Ferreira > >>> > >>> (47) 3025-5907 > >>> (47) 99676-7530 - Whatsapp / Telegram > >>> > >>> Skype: gilberto.nunes36 > >>> > >>> > >>> > >>> > >>> > >>> Em qui., 13 de fev. 
de 2020 ?s 09:19, Eneko Lacunza < > elacunza at binovo.es> > >>> escreveu: > >>> > >>>> What about: > >>>> > >>>> pvesm list local-lvm > >>>> ls -l /dev/pve/vm-110-disk-0 > >>>> > >>>> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > >>>>> Quite strange to say the least > >>>>> > >>>>> > >>>>> ls /dev/pve/* > >>>>> /dev/pve/root /dev/pve/vm-109-disk-0 > >>>>> /dev/pve/vm-118-disk-0 > >>>>> /dev/pve/swap /dev/pve/vm-110-disk-0 > >>>>> /dev/pve/vm-119-disk-0 > >>>>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > >>>>> /dev/pve/vm-121-disk-0 > >>>>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > >>>>> /dev/pve/vm-122-disk-0 > >>>>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > >>>>> /dev/pve/vm-123-disk-0 > >>>>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon > >>>>> /dev/pve/vm-124-disk-0 > >>>>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > >>>>> /dev/pve/vm-125-disk-0 > >>>>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > >>>>> /dev/pve/vm-126-disk-0 > >>>>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > >>>>> /dev/pve/vm-127-disk-0 > >>>>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > >>>>> /dev/pve/vm-129-disk-0 > >>>>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > >>>>> > >>>>> ls /dev/mapper/ > >>>>> control pve-vm--104--state--LUKPLAS > >>>>> pve-vm--115--disk--1 > >>>>> iscsi-backup pve-vm--105--disk--0 > >>>>> pve-vm--116--disk--0 > >>>>> mpatha pve-vm--106--disk--0 > >>>>> pve-vm--117--disk--0 > >>>>> pve-data pve-vm--107--disk--0 > >>>>> pve-vm--118--disk--0 > >>>>> pve-data_tdata pve-vm--108--disk--0 > >>>>> pve-vm--119--disk--0 > >>>>> pve-data_tmeta pve-vm--109--disk--0 > >>>>> pve-vm--121--disk--0 > >>>>> pve-data-tpool pve-vm--110--disk--0 > >>>>> pve-vm--122--disk--0 > >>>>> pve-root pve-vm--111--disk--0 > >>>>> pve-vm--123--disk--0 > >>>>> pve-swap pve-vm--112--disk--0 > >>>>> pve-vm--124--disk--0 > >>>>> pve-vm--101--disk--0 pve-vm--113--disk--0 > >>>>> pve-vm--125--disk--0 > >>>>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > >>>>> pve-vm--126--disk--0 > >>>>> pve-vm--103--disk--0 pve-vm--114--disk--0 > >>>>> pve-vm--127--disk--0 > >>>>> pve-vm--104--disk--0 pve-vm--115--disk--0 > >>>>> pve-vm--129--disk--0 > >>>>> > >>>>> > >>>>> --- > >>>>> Gilberto Nunes Ferreira > >>>>> > >>>>> (47) 3025-5907 > >>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>> > >>>>> Skype: gilberto.nunes36 > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < > >> elacunza at binovo.es> > >>>>> escreveu: > >>>>> > >>>>>> It's quite strange, what about "ls /dev/pve/*"? 
> >>>>>> > >>>>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > >>>>>>> n: Thu Feb 13 07:06:19 2020 > >>>>>>> a2web:~# lvs > >>>>>>> LV VG Attr LSize > Pool > >>>> Origin > >>>>>>> Data% Meta% Move Log Cpy%Sync Convert > >>>>>>> backup iscsi -wi-ao---- 1.61t > >>>>>>> > >>>>>>> data pve twi-aotz-- 3.34t > >>>>>>> 88.21 9.53 > >>>>>>> root pve -wi-ao---- 96.00g > >>>>>>> > >>>>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g > data > >>>>>>> vm-104-disk-0 > >>>>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g > data > >>>>>>> vm-113-disk-0 > >>>>>>> swap pve -wi-ao---- 8.00g > >>>>>>> > >>>>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g > data > >>>>>>> 24.17 > >>>>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g > data > >>>>>>> 65.65 > >>>>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g > data > >>>>>>> 37.28 > >>>>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g > data > >>>>>>> 17.87 > >>>>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g > data > >>>>>>> 35.53 > >>>>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g > data > >>>>>>> 90.18 > >>>>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g > data > >>>>>>> 93.55 > >>>>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g > data > >>>>>>> 98.20 > >>>>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g > data > >>>>>>> 98.02 > >>>>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 93.68 > >>>>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 34.55 > >>>>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 79.03 > >>>>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g > data > >>>>>>> 93.78 > >>>>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g > data > >>>>>>> 65.42 > >>>>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g > data > >>>>>>> 43.64 > >>>>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g > data > >>>>>>> 100.00 > >>>>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g > data > >>>>>>> 70.28 > >>>>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g > data > >>>>>>> 0.00 > >>>>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 26.34 > >>>>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 100.00 > >>>>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 100.00 > >>>>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g > data > >>>>>>> 18.42 > >>>>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 23.76 > >>>>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 100.00 > >>>>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g > data > >>>>>>> 37.89 > >>>>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g > data > >>>>>>> 30.73 > >>>>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g > data > >>>>>>> 9.02 > >>>>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g > data > >>>>>>> 99.72 > >>>>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g > data > >>>>>>> 10.79 > >>>>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g > data > >>>>>>> 45.04 > >>>>>>> > >>>>>>> cat /etc/pve/storage.cfg > >>>>>>> dir: local > >>>>>>> path /var/lib/vz > >>>>>>> content backup,iso,vztmpl > >>>>>>> > >>>>>>> lvmthin: local-lvm > >>>>>>> thinpool data > >>>>>>> vgname pve > >>>>>>> content rootdir,images > >>>>>>> > >>>>>>> iscsi: iscsi > >>>>>>> portal some-portal > >>>>>>> target some-target > >>>>>>> content images > >>>>>>> > >>>>>>> lvm: iscsi-lvm > >>>>>>> vgname iscsi > >>>>>>> base iscsi:0.0.0.scsi-mpatha > >>>>>>> content rootdir,images > >>>>>>> shared 1 > >>>>>>> > >>>>>>> dir: backup > >>>>>>> path /backup > >>>>>>> content images,rootdir,iso,backup > >>>>>>> maxfiles 3 > >>>>>>> shared 0 > >>>>>>> --- > >>>>>>> Gilberto Nunes Ferreira > >>>>>>> > >>>>>>> (47) 3025-5907 > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>>> > >>>>>>> Skype: gilberto.nunes36 > 
>>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < > >>>> elacunza at binovo.es> > >>>>>>> escreveu: > >>>>>>> > >>>>>>>> Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? > >>>>>>>> > >>>>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > >>>>>>>>> HI all > >>>>>>>>> > >>>>>>>>> Still in trouble with this issue > >>>>>>>>> > >>>>>>>>> cat daemon.log | grep "Feb 12 22:10" > >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication > >>>>>>>> runner... > >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication > >>>>>> runner. > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM > >> 110 > >>>>>>>> (qemu) > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > >> failed - > >>>>>> no > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>>>>>> > >>>>>>>>> syslog > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM > >> 110 > >>>>>>>> (qemu) > >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: -lock > >>>>>> backup > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > >> failed - > >>>>>> no > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > >>>>>>>>> > >>>>>>>>> pveversion > >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > >>>>>>>>> > >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > >>>>>>>>> pve-kernel-4.15: 5.4-12 > >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > >>>>>>>>> corosync: 2.4.4-pve1 > >>>>>>>>> criu: 2.11.1-1~bpo90 > >>>>>>>>> glusterfs-client: 3.8.8-1 > >>>>>>>>> ksm-control-daemon: 1.2-2 > >>>>>>>>> libjs-extjs: 6.0.1-2 > >>>>>>>>> libpve-access-control: 5.1-12 > >>>>>>>>> libpve-apiclient-perl: 2.0-5 > >>>>>>>>> libpve-common-perl: 5.0-56 > >>>>>>>>> libpve-guest-common-perl: 2.0-20 > >>>>>>>>> libpve-http-server-perl: 2.0-14 > >>>>>>>>> libpve-storage-perl: 5.0-44 > >>>>>>>>> libqb0: 1.0.3-1~bpo9 > >>>>>>>>> lvm2: 2.02.168-pve6 > >>>>>>>>> lxc-pve: 3.1.0-7 > >>>>>>>>> lxcfs: 3.0.3-pve1 > >>>>>>>>> novnc-pve: 1.0.0-3 > >>>>>>>>> proxmox-widget-toolkit: 1.0-28 > >>>>>>>>> pve-cluster: 5.0-38 > >>>>>>>>> pve-container: 2.0-41 > >>>>>>>>> pve-docs: 5.4-2 > >>>>>>>>> pve-edk2-firmware: 1.20190312-1 > >>>>>>>>> pve-firewall: 3.0-22 > >>>>>>>>> pve-firmware: 2.0-7 > >>>>>>>>> pve-ha-manager: 2.0-9 > >>>>>>>>> pve-i18n: 1.1-4 > >>>>>>>>> pve-libspice-server1: 0.14.1-2 > >>>>>>>>> pve-qemu-kvm: 3.0.1-4 > >>>>>>>>> pve-xtermjs: 3.12.0-1 > >>>>>>>>> qemu-server: 5.0-55 > >>>>>>>>> smartmontools: 6.5+svn4324-1 > >>>>>>>>> spiceterm: 3.0-5 > >>>>>>>>> vncterm: 1.5-3 > >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> Some help??? Sould I upgrade the server to 6.x?? > >>>>>>>>> > >>>>>>>>> Thanks > >>>>>>>>> > >>>>>>>>> --- > >>>>>>>>> Gilberto Nunes Ferreira > >>>>>>>>> > >>>>>>>>> (47) 3025-5907 > >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>>>>> > >>>>>>>>> Skype: gilberto.nunes36 > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > >>>>>>>>> gilberto.nunes32 at gmail.com> escreveu: > >>>>>>>>> > >>>>>>>>>> Hi there > >>>>>>>>>> > >>>>>>>>>> I got a strage error last night. Vzdump complain about the > >>>>>>>>>> disk no exist or lvm volume in this case but the volume exist, > >>>> indeed! 
> >>>>>>>>>> In the morning I have do a manually backup and it's working > >> fine... > >>>>>>>>>> Any advice? > >>>>>>>>>> > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu) > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > >>>>>>>> 'local-lvm:vm-112-disk-0' 120G > >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no > such > >>>>>>>> volume 'local-lvm:vm-112-disk-0' > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu) > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > >>>>>>>> 'local-lvm:vm-116-disk-0' 100G > >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no > such > >>>>>>>> volume 'local-lvm:vm-116-disk-0' > >>>>>>>>>> --- > >>>>>>>>>> Gilberto Nunes Ferreira > >>>>>>>>>> > >>>>>>>>>> (47) 3025-5907 > >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > >>>>>>>>>> > >>>>>>>>>> Skype: gilberto.nunes36 > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>> _______________________________________________ > >>>>>>>>> pve-user mailing list > >>>>>>>>> pve-user at pve.proxmox.com > >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>>>> -- > >>>>>>>> Zuzendari Teknikoa / Director T?cnico > >>>>>>>> Binovo IT Human Project, S.L. > >>>>>>>> Telf. 943569206 > >>>>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > (Gipuzkoa) > >>>>>>>> www.binovo.es > >>>>>>>> > >>>>>>>> _______________________________________________ > >>>>>>>> pve-user mailing list > >>>>>>>> pve-user at pve.proxmox.com > >>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> pve-user mailing list > >>>>>>> pve-user at pve.proxmox.com > >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>> -- > >>>>>> Zuzendari Teknikoa / Director T?cnico > >>>>>> Binovo IT Human Project, S.L. > >>>>>> Telf. 943569206 > >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>>>> www.binovo.es > >>>>>> > >>>>>> _______________________________________________ > >>>>>> pve-user mailing list > >>>>>> pve-user at pve.proxmox.com > >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>>>> > >>>>> _______________________________________________ > >>>>> pve-user mailing list > >>>>> pve-user at pve.proxmox.com > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> -- > >>>> Zuzendari Teknikoa / Director T?cnico > >>>> Binovo IT Human Project, S.L. > >>>> Telf. 943569206 > >>>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>> www.binovo.es > >>>> > >>>> _______________________________________________ > >>>> pve-user mailing list > >>>> pve-user at pve.proxmox.com > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gianni.milo22 at gmail.com Fri Feb 14 18:20:55 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Fri, 14 Feb 2020 17:20:55 +0000 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Message-ID: If it's happening randomly, my best guess would be that it might be related to high i/o during the time frame that the backup takes place. Have you tried creating multiple backup schedules which will take place at different times ? Setting backup bandwidth limits might also help. Check the PVE administration guide for more details on this. You could check for any clues in syslog during the time that the failed backup takes place as well. G. On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes wrote: > HI guys > > Some problem but with two different vms... > I also update Proxmox still in 5.x series, but no changes... Now this > problem ocurrs twice, one night after other... > I am very concerned about it! > Please, Proxmox staff, is there something I can do to solve this issue? > Anybody alread do a bugzilla??? > > Thanks > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qui., 13 de fev. de 2020 ?s 19:53, Atila Vasconcelos < > atilav at lightspeed.ca> escreveu: > > > Hi, > > > > I had the same problem in the past and it repeats once a while.... its > > very random; I could not find any way to reproduce it. > > > > But as it happens... it will go away. > > > > When you are almost forgetting about it, it will come again ;) > > > > I just learned to ignore it (and do manually the backup when it fails) > > > > I see in proxmox 6.x it is less frequent (but still happening once a > > while). > > > > > > ABV > > > > > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote: > > > Yeah! Me too... This problem is pretty random... Let see next week! > > > --- > > > Gilberto Nunes Ferreira > > > > > > (47) 3025-5907 > > > (47) 99676-7530 - Whatsapp / Telegram > > > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > > > > > > > Em qui., 13 de fev. de 2020 ?s 09:29, Eneko Lacunza < > elacunza at binovo.es> > > > escreveu: > > > > > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of > ideas > > >> now, sorry!!! 
;) > > >> > > >> El 13/2/20 a las 13:24, Gilberto Nunes escribi?: > > >>> I can assure you... the disk is there! > > >>> > > >>> pvesm list local-lvm > > >>> local-lvm:vm-101-disk-0 raw 53687091200 101 > > >>> local-lvm:vm-102-disk-0 raw 536870912000 102 > > >>> local-lvm:vm-103-disk-0 raw 322122547200 103 > > >>> local-lvm:vm-104-disk-0 raw 214748364800 104 > > >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 > > >>> local-lvm:vm-105-disk-0 raw 751619276800 105 > > >>> local-lvm:vm-106-disk-0 raw 161061273600 106 > > >>> local-lvm:vm-107-disk-0 raw 536870912000 107 > > >>> local-lvm:vm-108-disk-0 raw 214748364800 108 > > >>> local-lvm:vm-109-disk-0 raw 107374182400 109 > > >>> local-lvm:vm-110-disk-0 raw 107374182400 110 > > >>> local-lvm:vm-111-disk-0 raw 107374182400 111 > > >>> local-lvm:vm-112-disk-0 raw 128849018880 112 > > >>> local-lvm:vm-113-disk-0 raw 53687091200 113 > > >>> local-lvm:vm-113-state-antes_balloon raw 17704157184 113 > > >>> local-lvm:vm-114-disk-0 raw 128849018880 114 > > >>> local-lvm:vm-115-disk-0 raw 107374182400 115 > > >>> local-lvm:vm-115-disk-1 raw 53687091200 115 > > >>> local-lvm:vm-116-disk-0 raw 107374182400 116 > > >>> local-lvm:vm-117-disk-0 raw 107374182400 117 > > >>> local-lvm:vm-118-disk-0 raw 107374182400 118 > > >>> local-lvm:vm-119-disk-0 raw 26843545600 119 > > >>> local-lvm:vm-121-disk-0 raw 107374182400 121 > > >>> local-lvm:vm-122-disk-0 raw 107374182400 122 > > >>> local-lvm:vm-123-disk-0 raw 161061273600 123 > > >>> local-lvm:vm-124-disk-0 raw 107374182400 124 > > >>> local-lvm:vm-125-disk-0 raw 53687091200 125 > > >>> local-lvm:vm-126-disk-0 raw 32212254720 126 > > >>> local-lvm:vm-127-disk-0 raw 53687091200 127 > > >>> local-lvm:vm-129-disk-0 raw 21474836480 129 > > >>> > > >>> ls -l /dev/pve/vm-110-disk-0 > > >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> > > ../dm-15 > > >>> > > >>> > > >>> --- > > >>> Gilberto Nunes Ferreira > > >>> > > >>> (47) 3025-5907 > > >>> (47) 99676-7530 - Whatsapp / Telegram > > >>> > > >>> Skype: gilberto.nunes36 > > >>> > > >>> > > >>> > > >>> > > >>> > > >>> Em qui., 13 de fev. 
de 2020 ?s 09:19, Eneko Lacunza < > > elacunza at binovo.es> > > >>> escreveu: > > >>> > > >>>> What about: > > >>>> > > >>>> pvesm list local-lvm > > >>>> ls -l /dev/pve/vm-110-disk-0 > > >>>> > > >>>> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > > >>>>> Quite strange to say the least > > >>>>> > > >>>>> > > >>>>> ls /dev/pve/* > > >>>>> /dev/pve/root /dev/pve/vm-109-disk-0 > > >>>>> /dev/pve/vm-118-disk-0 > > >>>>> /dev/pve/swap /dev/pve/vm-110-disk-0 > > >>>>> /dev/pve/vm-119-disk-0 > > >>>>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > > >>>>> /dev/pve/vm-121-disk-0 > > >>>>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > > >>>>> /dev/pve/vm-122-disk-0 > > >>>>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > > >>>>> /dev/pve/vm-123-disk-0 > > >>>>> /dev/pve/vm-104-disk-0 /dev/pve/vm-113-state-antes_balloon > > >>>>> /dev/pve/vm-124-disk-0 > > >>>>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > > >>>>> /dev/pve/vm-125-disk-0 > > >>>>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > > >>>>> /dev/pve/vm-126-disk-0 > > >>>>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > > >>>>> /dev/pve/vm-127-disk-0 > > >>>>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > > >>>>> /dev/pve/vm-129-disk-0 > > >>>>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > > >>>>> > > >>>>> ls /dev/mapper/ > > >>>>> control pve-vm--104--state--LUKPLAS > > >>>>> pve-vm--115--disk--1 > > >>>>> iscsi-backup pve-vm--105--disk--0 > > >>>>> pve-vm--116--disk--0 > > >>>>> mpatha pve-vm--106--disk--0 > > >>>>> pve-vm--117--disk--0 > > >>>>> pve-data pve-vm--107--disk--0 > > >>>>> pve-vm--118--disk--0 > > >>>>> pve-data_tdata pve-vm--108--disk--0 > > >>>>> pve-vm--119--disk--0 > > >>>>> pve-data_tmeta pve-vm--109--disk--0 > > >>>>> pve-vm--121--disk--0 > > >>>>> pve-data-tpool pve-vm--110--disk--0 > > >>>>> pve-vm--122--disk--0 > > >>>>> pve-root pve-vm--111--disk--0 > > >>>>> pve-vm--123--disk--0 > > >>>>> pve-swap pve-vm--112--disk--0 > > >>>>> pve-vm--124--disk--0 > > >>>>> pve-vm--101--disk--0 pve-vm--113--disk--0 > > >>>>> pve-vm--125--disk--0 > > >>>>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > > >>>>> pve-vm--126--disk--0 > > >>>>> pve-vm--103--disk--0 pve-vm--114--disk--0 > > >>>>> pve-vm--127--disk--0 > > >>>>> pve-vm--104--disk--0 pve-vm--115--disk--0 > > >>>>> pve-vm--129--disk--0 > > >>>>> > > >>>>> > > >>>>> --- > > >>>>> Gilberto Nunes Ferreira > > >>>>> > > >>>>> (47) 3025-5907 > > >>>>> (47) 99676-7530 - Whatsapp / Telegram > > >>>>> > > >>>>> Skype: gilberto.nunes36 > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> > > >>>>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < > > >> elacunza at binovo.es> > > >>>>> escreveu: > > >>>>> > > >>>>>> It's quite strange, what about "ls /dev/pve/*"? 
> > >>>>>> > > >>>>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > > >>>>>>> n: Thu Feb 13 07:06:19 2020 > > >>>>>>> a2web:~# lvs > > >>>>>>> LV VG Attr LSize > > Pool > > >>>> Origin > > >>>>>>> Data% Meta% Move Log Cpy%Sync Convert > > >>>>>>> backup iscsi -wi-ao---- 1.61t > > >>>>>>> > > >>>>>>> data pve twi-aotz-- 3.34t > > >>>>>>> 88.21 9.53 > > >>>>>>> root pve -wi-ao---- 96.00g > > >>>>>>> > > >>>>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g > > data > > >>>>>>> vm-104-disk-0 > > >>>>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g > > data > > >>>>>>> vm-113-disk-0 > > >>>>>>> swap pve -wi-ao---- 8.00g > > >>>>>>> > > >>>>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g > > data > > >>>>>>> 24.17 > > >>>>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g > > data > > >>>>>>> 65.65 > > >>>>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g > > data > > >>>>>>> 37.28 > > >>>>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g > > data > > >>>>>>> 17.87 > > >>>>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g > > data > > >>>>>>> 35.53 > > >>>>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g > > data > > >>>>>>> 90.18 > > >>>>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g > > data > > >>>>>>> 93.55 > > >>>>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g > > data > > >>>>>>> 98.20 > > >>>>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g > > data > > >>>>>>> 98.02 > > >>>>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 93.68 > > >>>>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 34.55 > > >>>>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 79.03 > > >>>>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g > > data > > >>>>>>> 93.78 > > >>>>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g > > data > > >>>>>>> 65.42 > > >>>>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g > > data > > >>>>>>> 43.64 > > >>>>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g > > data > > >>>>>>> 100.00 > > >>>>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g > > data > > >>>>>>> 70.28 > > >>>>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g > > data > > >>>>>>> 0.00 > > >>>>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 26.34 > > >>>>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 100.00 > > >>>>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 100.00 > > >>>>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g > > data > > >>>>>>> 18.42 > > >>>>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 23.76 > > >>>>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 100.00 > > >>>>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g > > data > > >>>>>>> 37.89 > > >>>>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g > > data > > >>>>>>> 30.73 > > >>>>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g > > data > > >>>>>>> 9.02 > > >>>>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g > > data > > >>>>>>> 99.72 > > >>>>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g > > data > > >>>>>>> 10.79 > > >>>>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g > > data > > >>>>>>> 45.04 > > >>>>>>> > > >>>>>>> cat /etc/pve/storage.cfg > > >>>>>>> dir: local > > >>>>>>> path /var/lib/vz > > >>>>>>> content backup,iso,vztmpl > > >>>>>>> > > >>>>>>> lvmthin: local-lvm > > >>>>>>> thinpool data > > >>>>>>> vgname pve > > >>>>>>> content rootdir,images > > >>>>>>> > > >>>>>>> iscsi: iscsi > > >>>>>>> portal some-portal > > >>>>>>> target some-target > > >>>>>>> content images > > >>>>>>> > > >>>>>>> lvm: iscsi-lvm > > >>>>>>> vgname iscsi > > >>>>>>> base iscsi:0.0.0.scsi-mpatha > > >>>>>>> content rootdir,images > > >>>>>>> shared 1 > > >>>>>>> > > >>>>>>> dir: backup > > >>>>>>> path 
/backup > > >>>>>>> content images,rootdir,iso,backup > > >>>>>>> maxfiles 3 > > >>>>>>> shared 0 > > >>>>>>> --- > > >>>>>>> Gilberto Nunes Ferreira > > >>>>>>> > > >>>>>>> (47) 3025-5907 > > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > >>>>>>> > > >>>>>>> Skype: gilberto.nunes36 > > >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> > > >>>>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < > > >>>> elacunza at binovo.es> > > >>>>>>> escreveu: > > >>>>>>> > > >>>>>>>> Can you send the output for "lvs" and "cat > /etc/pve/storage.cfg"? > > >>>>>>>> > > >>>>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > > >>>>>>>>> HI all > > >>>>>>>>> > > >>>>>>>>> Still in trouble with this issue > > >>>>>>>>> > > >>>>>>>>> cat daemon.log | grep "Feb 12 22:10" > > >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE > replication > > >>>>>>>> runner... > > >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE > replication > > >>>>>> runner. > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of > VM > > >> 110 > > >>>>>>>> (qemu) > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > > >> failed - > > >>>>>> no > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > > >>>>>>>>> > > >>>>>>>>> syslog > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of > VM > > >> 110 > > >>>>>>>> (qemu) > > >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: > -lock > > >>>>>> backup > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > > >> failed - > > >>>>>> no > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > > >>>>>>>>> > > >>>>>>>>> pveversion > > >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > > >>>>>>>>> > > >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > > >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > > >>>>>>>>> pve-kernel-4.15: 5.4-12 > > >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > > >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > > >>>>>>>>> corosync: 2.4.4-pve1 > > >>>>>>>>> criu: 2.11.1-1~bpo90 > > >>>>>>>>> glusterfs-client: 3.8.8-1 > > >>>>>>>>> ksm-control-daemon: 1.2-2 > > >>>>>>>>> libjs-extjs: 6.0.1-2 > > >>>>>>>>> libpve-access-control: 5.1-12 > > >>>>>>>>> libpve-apiclient-perl: 2.0-5 > > >>>>>>>>> libpve-common-perl: 5.0-56 > > >>>>>>>>> libpve-guest-common-perl: 2.0-20 > > >>>>>>>>> libpve-http-server-perl: 2.0-14 > > >>>>>>>>> libpve-storage-perl: 5.0-44 > > >>>>>>>>> libqb0: 1.0.3-1~bpo9 > > >>>>>>>>> lvm2: 2.02.168-pve6 > > >>>>>>>>> lxc-pve: 3.1.0-7 > > >>>>>>>>> lxcfs: 3.0.3-pve1 > > >>>>>>>>> novnc-pve: 1.0.0-3 > > >>>>>>>>> proxmox-widget-toolkit: 1.0-28 > > >>>>>>>>> pve-cluster: 5.0-38 > > >>>>>>>>> pve-container: 2.0-41 > > >>>>>>>>> pve-docs: 5.4-2 > > >>>>>>>>> pve-edk2-firmware: 1.20190312-1 > > >>>>>>>>> pve-firewall: 3.0-22 > > >>>>>>>>> pve-firmware: 2.0-7 > > >>>>>>>>> pve-ha-manager: 2.0-9 > > >>>>>>>>> pve-i18n: 1.1-4 > > >>>>>>>>> pve-libspice-server1: 0.14.1-2 > > >>>>>>>>> pve-qemu-kvm: 3.0.1-4 > > >>>>>>>>> pve-xtermjs: 3.12.0-1 > > >>>>>>>>> qemu-server: 5.0-55 > > >>>>>>>>> smartmontools: 6.5+svn4324-1 > > >>>>>>>>> spiceterm: 3.0-5 > > >>>>>>>>> vncterm: 1.5-3 > > >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> Some help??? Sould I upgrade the server to 6.x?? 
> > >>>>>>>>> > > >>>>>>>>> Thanks > > >>>>>>>>> > > >>>>>>>>> --- > > >>>>>>>>> Gilberto Nunes Ferreira > > >>>>>>>>> > > >>>>>>>>> (47) 3025-5907 > > >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > >>>>>>>>> > > >>>>>>>>> Skype: gilberto.nunes36 > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> > > >>>>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > > >>>>>>>>> gilberto.nunes32 at gmail.com> escreveu: > > >>>>>>>>> > > >>>>>>>>>> Hi there > > >>>>>>>>>> > > >>>>>>>>>> I got a strage error last night. Vzdump complain about the > > >>>>>>>>>> disk no exist or lvm volume in this case but the volume exist, > > >>>> indeed! > > >>>>>>>>>> In the morning I have do a manually backup and it's working > > >> fine... > > >>>>>>>>>> Any advice? > > >>>>>>>>>> > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 > (qemu) > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > > >>>>>>>> 'local-lvm:vm-112-disk-0' 120G > > >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no > > such > > >>>>>>>> volume 'local-lvm:vm-112-disk-0' > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 > (qemu) > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > > >>>>>>>> 'local-lvm:vm-116-disk-0' 100G > > >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no > > such > > >>>>>>>> volume 'local-lvm:vm-116-disk-0' > > >>>>>>>>>> --- > > >>>>>>>>>> Gilberto Nunes Ferreira > > >>>>>>>>>> > > >>>>>>>>>> (47) 3025-5907 > > >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > >>>>>>>>>> > > >>>>>>>>>> Skype: gilberto.nunes36 > > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>> _______________________________________________ > > >>>>>>>>> pve-user mailing list > > >>>>>>>>> pve-user at pve.proxmox.com > > >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>>>>>> -- > > >>>>>>>> Zuzendari Teknikoa / Director T?cnico > > >>>>>>>> Binovo IT Human Project, S.L. > > >>>>>>>> Telf. 943569206 > > >>>>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > > (Gipuzkoa) > > >>>>>>>> www.binovo.es > > >>>>>>>> > > >>>>>>>> _______________________________________________ > > >>>>>>>> pve-user mailing list > > >>>>>>>> pve-user at pve.proxmox.com > > >>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>>>>>> > > >>>>>>> _______________________________________________ > > >>>>>>> pve-user mailing list > > >>>>>>> pve-user at pve.proxmox.com > > >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>>>> -- > > >>>>>> Zuzendari Teknikoa / Director T?cnico > > >>>>>> Binovo IT Human Project, S.L. > > >>>>>> Telf. 943569206 > > >>>>>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun > (Gipuzkoa) > > >>>>>> www.binovo.es > > >>>>>> > > >>>>>> _______________________________________________ > > >>>>>> pve-user mailing list > > >>>>>> pve-user at pve.proxmox.com > > >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>>>> > > >>>>> _______________________________________________ > > >>>>> pve-user mailing list > > >>>>> pve-user at pve.proxmox.com > > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>> -- > > >>>> Zuzendari Teknikoa / Director Técnico > > >>>> Binovo IT Human Project, S.L. > > >>>> Telf. 943569206 > > >>>> Astigarragako bidea 2, 2ª izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > > >>>> www.binovo.es > > >>>> > > >>>> _______________________________________________ > > >>>> pve-user mailing list > > >>>> pve-user at pve.proxmox.com > > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>>> > > >>> _______________________________________________ > > >>> pve-user mailing list > > >>> pve-user at pve.proxmox.com > > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >> > > >> -- > > >> Zuzendari Teknikoa / Director Técnico > > >> Binovo IT Human Project, S.L. > > >> Telf. 943569206 > > >> Astigarragako bidea 2, 2ª izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > > >> www.binovo.es > > >> > > >> _______________________________________________ > > >> pve-user mailing list > > >> pve-user at pve.proxmox.com > > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >> > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.thommen at dkfz-heidelberg.de Fri Feb 14 23:57:07 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Fri, 14 Feb 2020 23:57:07 +0100 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? Message-ID: Dear all, the PVE documentation on https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, that "[File level based storage technologies] allow you to store content of any type". However I found now - after having combined all available disks in a big CephFS setup - that this is not true for CephFS, which does not support images and rootdir. What is the reason, that of all filesystems, CephFS doesn't support the main PVE content type? :-) Ceph RBD on the other hand, doesn't support backup. This seems to rule out Ceph (in whatever variant) as a unifying, shared storage for PVE. Or do I miss an important point here? Frank From gianni.milo22 at gmail.com Sat Feb 15 00:38:03 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Fri, 14 Feb 2020 23:38:03 +0000 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? In-Reply-To: References: Message-ID: This has been discussed in the past, see the post below for some answers...
https://www.mail-archive.com/pve-user at pve.proxmox.com/msg10160.html On Fri, 14 Feb 2020 at 22:57, Frank Thommen wrote: > Dear all, > > the PVE documentation on > https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, > that "[File level based storage technologies] allow you to store content > of any type". However I found now - after having combined all available > disks in a big CephFS setup - that this is not true for CephFS, which > does not support images and rootdir. > > What is the reason, that of all filesystems, CephFS doesn't support the > main PVE content type? :-) > > Ceph RBD on the other hand, doesn't support backup. > > This seems to rule out Ceph (in whatever variant) as a unifying, shared > storage for PVE. Or do I miss an important point here? > > Frank > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From devzero at web.de Sun Feb 16 11:28:52 2020 From: devzero at web.de (Roland @web.de) Date: Sun, 16 Feb 2020 11:28:52 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage Message-ID: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> hello, why do i need to have the same local storage name when migrating a vm from node1 to node2 in dual-node cluster with local disks ? i'm curious that migration is possible in online state (which is much more complex/challenging task) without a problem, but offline i get "storage is not available on selected target" (because there are different zfs pools on both machines) i guess there is no real technical hurdle, it just needs to get implemented appropriately !? regards roland From devzero at web.de Sun Feb 16 11:53:50 2020 From: devzero at web.de (Roland @web.de) Date: Sun, 16 Feb 2020 11:53:50 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage In-Reply-To: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> References: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> Message-ID: <7f5fe767-487c-a2c4-ba19-b3f91d544f49@web.de> for my curiosity, in a cluster i can have "rpool" on both nodes (both are local to each node, i.e. are different pools but with same name) and i can blazingly fast offline migrate VMs from rpool on node1 to rpool on node2 (as it's done via zfs snapshot). but i cannot add a second pool "hddpool" on different disks on both nodes with gui, as i get error "storage ID 'hddpool' already defined (500)" if i want to add 2nd zfs pool to node2 with the same name. what i can do is adding pool via commandline on node2 and then remove "nodes node1" from storage.cfg (manual edit) to make it analogous to rpool - i can use hddpool like rpool after that. i wonder if this is the proper "way to go"!? anybody running proxmox cluster without shared storage at a larger scale? what i try is developing a concept for running a maintainable vm infrastructure without need for HA shared storage&networking (as we have vm live migration including disks and on server defect, i simply could pull all disks out and put into a spare machine of same type) regards Roland Am 16.02.20 um 11:28 schrieb Roland @web.de: > hello, > > why do i need to have the same local storage name when migrating a vm > from node1 to node2 in dual-node cluster with local disks ?
> > i'm curious that migration is possible in online state (which is much > more complex/challenging task) without a problem, but offline i get > "storage is not available on selected target" (because there are > differenz zfs pools on both machines) > > i guess there is no real technical hurdle, it just needs to get > implemented appropriatley !? > > regards > roland > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From f.thommen at dkfz-heidelberg.de Sun Feb 16 17:26:01 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Sun, 16 Feb 2020 17:26:01 +0100 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? In-Reply-To: References: Message-ID: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> Thank you for the link. Even though Fabian Gruenbichler writes in the bugreport (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD offers all features of CephFS, this doesn't seem to be true(?), as CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these two storage types are complementary and RBD cannot replace CephFS. What would be a good practice (CephFS and RBD are already set up): Create an RBD storage on the same (PVE based) Ceph storage that already has CephFS on top of it and use one for templates and backups and the other for images and rootdir? Won't it create problems when using the same Ceph pool with CephFS /and/ RBD (this is probably rather a Ceph question, though) Additionally this might create problems with our inhouse tape backup, as I don't think it supports backing up object storage... frank On 15/02/2020 00:38, Gianni Milo wrote: > This has been discussed in the past, see the post below for some answers... > > https://www.mail-archive.com/pve-user at pve.proxmox.com/msg10160.html > > > > > On Fri, 14 Feb 2020 at 22:57, Frank Thommen > wrote: > >> Dear all, >> >> the PVE documentation on >> https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, >> that "[File level based storage technologies] allow you to store content >> of any type". However I found now - after having combined all available >> disks in a big CephFS setup - that this is not true for CephFS, which >> does not support images and rootdir. >> >> What is the reason, that of all filesystens, CephFS doesn't support the >> main PVE content type? :-) >> >> Ceph RPD on the other hand, doesn't support backup. >> >> This seems to rule our Ceph (in whatever variant) as an unifying, shared >> storage for PVE. Or do I miss an important point here? >> >> Frank >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.thommen at dkfz-heidelberg.de Sun Feb 16 21:55:41 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Sun, 16 Feb 2020 21:55:41 +0100 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? 
In-Reply-To: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> References: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> Message-ID: <9b600c21-e922-b917-1435-734c38f0ed67@dkfz-heidelberg.de> I have now configured a Directory storage which points to the CephFS mountpoint. When creating a container (Alpine Linux, 10 GB disk, 2 GB Memory), this happens in the blink of an eye when using Ceph RBD or local storage as root disk, but it takes very, very long (10 to 20 times longer) when using the CephFS-directory as root disk. On 16/02/2020 17:26, Frank Thommen wrote: > Thank you for the link. > > Even though Fabian Gruenbichler writes in the bugreport > (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD > offers all features of CephFS, this doesn't seem to be true(?), as > CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images > rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these > two storage types are complementary and RBD cannot replace CephFS. > > What would be a good practice (CephFS and RBD are already set up): > Create an RBD storage on the same (PVE based) Ceph storage that already > has CephFS on top of it and use one for templates and backups and the > other for images and rootdir? > > Won't it create problems when using the same Ceph pool with CephFS /and/ > RBD (this is probably rather a Ceph question, though) > > Additionally this might create problems with our inhouse tape backup, as > I don't think it supports backing up object storage... > > frank > > > On 15/02/2020 00:38, Gianni Milo wrote: >> This has been discussed in the past, see the post below for some >> answers... >> >> https://www.mail-archive.com/pve-user at pve.proxmox.com/msg10160.html >> >> >> >> >> On Fri, 14 Feb 2020 at 22:57, Frank Thommen >> >> wrote: >> >>> Dear all, >>> >>> the PVE documentation on >>> https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, >>> that "[File level based storage technologies] allow you to store content >>> of any type".? However I found now - after having combined all available >>> disks in a big CephFS setup - that this is not true for CephFS, which >>> does not support images and rootdir. >>> >>> What is the reason, that of all filesystens, CephFS doesn't support the >>> main PVE content type? :-) >>> >>> Ceph RPD on the other hand, doesn't support backup. >>> >>> This seems to rule our Ceph (in whatever variant) as an unifying, shared >>> storage for PVE.? Or do I miss an important point here? >>> >>> Frank >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From f.gruenbichler at proxmox.com Mon Feb 17 07:23:25 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Mon, 17 Feb 2020 07:23:25 +0100 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? 
In-Reply-To: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> References: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> Message-ID: <1581920353.wcq10qohfr.astroid@nora.none> On February 16, 2020 5:26 pm, Frank Thommen wrote: > Thank you for the link. > > Even though Fabian Gruenbichler writes in the bugreport > (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD > offers all features of CephFS, this doesn't seem to be true(?), as > CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images > rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these > two storage types are complementary and RBD cannot replace CephFS. That comment was about using CephFS for guest image/volume storage. For that use case, RBD has all the same features (even more), but with better performance. Obviously I didn't mean that RBD is a file system ;) > What would be a good practice (CephFS and RBD are already set up): > Create an RBD storage on the same (PVE based) Ceph storage that already > has CephFS on top of it and use one for templates and backups and the > other for images and rootdir? yes > Won't it create problems when using the same Ceph pool with CephFS /and/ > RBD (this is probably rather a Ceph question, though) You don't use the same Ceph pool, just the same OSDs (pools are logical in Ceph, unlike with ZFS), so this is not a problem. > Additionally this might create problems with our inhouse tape backup, as > I don't think it supports backing up object storage... The usual backup options are available - use vzdump, and then back up the VMA files. Or use some backup solution inside the guest. Or both ;) From info at aminvakil.com Mon Feb 17 08:30:22 2020 From: info at aminvakil.com (Amin Vakil) Date: Mon, 17 Feb 2020 11:00:22 +0330 Subject: [PVE-User] Broken link in Ceph Wiki Message-ID: This link is broken and gives 404 not found. http://ceph.com/papers/weil-thesis.pdf I think this is the new and working link: https://ceph.com/wp-content/uploads/2016/08/weil-thesis.pdf From t.lamprecht at proxmox.com Mon Feb 17 08:39:44 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Mon, 17 Feb 2020 08:39:44 +0100 Subject: [PVE-User] Broken link in Ceph Wiki In-Reply-To: References: Message-ID: <278bda81-aa07-361a-4a81-346a7d58825e@proxmox.com> Hi, On 2/17/20 8:30 AM, Amin Vakil wrote: > This link is broken and gives 404 not found. > > http://ceph.com/papers/weil-thesis.pdf > > I think this is the new and working link: > > https://ceph.com/wp-content/uploads/2016/08/weil-thesis.pdf Yes, that seems right, thanks for telling us. Can you please also link to the page which contains the outdated link - I did not find it immediately in our wiki, as we have multiple pages regarding Ceph. That'd be great! cheers, Thomas From a.lauterer at proxmox.com Mon Feb 17 09:26:48 2020 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 17 Feb 2020 09:26:48 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage In-Reply-To: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> References: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> Message-ID: On 2/16/20 11:28 AM, Roland @web.de wrote: > hello, > > why do i need to have the same local storage name when migrating a vm > from node1 to node2 in dual-node cluster with local disks ?
> > i'm curious that migration is possible in online state (which is much > more complex/challenging task) without a problem, but offline i get > "storage is not available on selected target" (because there are > differenz zfs pools on both machines) This is because offline and online migration use two very different mechanism. AFAIK Qemu NBD is used for online migration and ZFS send->recv is used for offline migration. > > i guess there is no real technical hurdle, it just needs to get > implemented appropriatley !? There is a patch in the works to make different target storages possible for offline migration. > > regards > roland > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From a.lauterer at proxmox.com Mon Feb 17 09:37:48 2020 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Mon, 17 Feb 2020 09:37:48 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage In-Reply-To: <7f5fe767-487c-a2c4-ba19-b3f91d544f49@web.de> References: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> <7f5fe767-487c-a2c4-ba19-b3f91d544f49@web.de> Message-ID: <12a71e94-3b9c-ac50-8672-a1bc7c7fb5e0@proxmox.com> On 2/16/20 11:53 AM, Roland @web.de wrote: > for my curiousity, in a cluster i can have "rpool" on both nodes (both > are local to each node, i.e. are different pools but with same name) and > i can blazingly fast offline migrate VMs from rpool on node1 to rpool on > node2 (as it's done via zfs snapshot). > > but i cannot add a second pool "hddpool" on different disks on both > nodes with gui, as i get error "storage ID 'hddpool' already defined > (500)" if i want to add 2nd zfs pool to node2 with the same name. > > what i can do is adding pool via commandline on node2 and then remove > "nodes node1" from strorage.cfg (manual edit) to make it analogous to > rpool - i can use hddpool like rpool after that. > > i wonder if this is the proper "way to go"!? There is a "Add Storage" checkbox below the name field when you create a new zpool via the GUI. After you created the pool on the first node, you have to uncheck it because the storage configuration is shared among all cluster nodes and already exists. > > anybody running proxmox cluster without shared storage as larger scale? > > what i try is developing a concept for running a maintainable vm > infrastructure without need for HA shared storage&networking (as we have > vm live migration including disks and on server defect, i simply could > pull all disks out and put into a spare machine of same type) Have you taken a look at the built in Replication to have a copy of a VM on another node? How many nodes do you want to have in your cluster? To be honest, ZFS and replication is a nice setup for a smaller 2 node cluster (with QDevice for quorum). Once you have more nodes it gets complicated quite quickly. A shared storage does not only give you HA, should you need it, but simplify the administration of the cluster. If you want to avoid an expensive storage appliance that supports HA, have a look at the Ceph integration of Proxmox VE. 
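For the "hddpool" example above, the command-line equivalent would be roughly as follows (a sketch only, not verified here; "hddpool" and the node names come from this thread, the disk device names are made up):

  # on each node: create the pool locally, without adding a storage entry for it
  zpool create hddpool mirror /dev/sdc /dev/sdd

  # once, from any node: add a single cluster-wide storage definition
  pvesm add zfspool hddpool --pool hddpool --content images,rootdir --nodes node1,node2

That single storage.cfg entry then covers the like-named local pool on every listed node, which is why the "Add Storage" checkbox only needs to be ticked the first time.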
> > regards > Roland > > From devzero at web.de Mon Feb 17 10:01:12 2020 From: devzero at web.de (Roland @web.de) Date: Mon, 17 Feb 2020 10:01:12 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage In-Reply-To: <12a71e94-3b9c-ac50-8672-a1bc7c7fb5e0@proxmox.com> References: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> <7f5fe767-487c-a2c4-ba19-b3f91d544f49@web.de> <12a71e94-3b9c-ac50-8672-a1bc7c7fb5e0@proxmox.com> Message-ID: > There is a "Add Storage" checkbox below the name field when you create > a new zpool via the GUI. > > After you created the pool on the first node, you have to uncheck it > because the storage configuration is shared among all cluster nodes > and already exists. ahh, thanks, will check. >> >> anybody running proxmox cluster without shared storage as larger scale? >> >> what i try is developing a concept for running a maintainable vm >> infrastructure without need for HA shared storage&networking (as we have >> vm live migration including disks and on server defect, i simply could >> pull all disks out and put into a spare machine of same type) > > Have you taken a look at the built in Replication to have a copy of a > VM on another node? yes, i'm aware of that feature. > > How many nodes do you want to have in your cluster? To be honest, ZFS > and replication is a nice setup for a smaller 2 node cluster (with > QDevice for quorum). Once you have more nodes it gets complicated > quite quickly. A shared storage does not only give you HA, should you > need it, but simplify the administration of the cluster. > > If you want to avoid an expensive storage appliance that supports HA, > have a look at the Ceph integration of Proxmox VE. yes, but you do not only need HA for your storage appliance, you also need to make the network or san in between highly available , too.? every storage update is something which makes you go very nervous... ceph is too complex beast for us, we like simplicity. From devzero at web.de Mon Feb 17 10:01:56 2020 From: devzero at web.de (Roland @web.de) Date: Mon, 17 Feb 2020 10:01:56 +0100 Subject: [PVE-User] offline VM migration node1->node2 with local storage In-Reply-To: References: <04162270-7d68-cdfd-fa4b-e1adc09c85ed@web.de> Message-ID: <5ba6b632-8289-6641-5f2a-0333807db9c4@web.de> >> >> i guess there is no real technical hurdle, it just needs to get >> implemented appropriatley !? > > There is a patch in the works to make different target storages > possible for offline migration. fantastic, thanks! :) >> >> regards >> roland >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > From humbertos at ifsc.edu.br Mon Feb 17 14:11:39 2020 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Mon, 17 Feb 2020 10:11:39 -0300 (BRT) Subject: [PVE-User] Why are images and rootdir not supported on CephFS? In-Reply-To: <9b600c21-e922-b917-1435-734c38f0ed67@dkfz-heidelberg.de> References: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> <9b600c21-e922-b917-1435-734c38f0ed67@dkfz-heidelberg.de> Message-ID: <1375069464.192428.1581945099953.JavaMail.zimbra@ifsc.edu.br> We have a deploy with RBD and cephfs. We are doing backup in a NFS storage shared with all hosts of cluster. One host act how NFS server and the others are clients. It's working well. Humberto. 
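For reference, such a shared NFS backup storage boils down to a few lines in /etc/pve/storage.cfg, roughly like this (a sketch; the server address and export path below are placeholders):

  nfs: backup-nfs
        server 192.168.1.10
        export /export/pve-backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3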
De: "Frank Thommen" Para: pve-user at pve.proxmox.com Enviadas: Domingo, 16 de fevereiro de 2020 17:55:41 Assunto: Re: [PVE-User] Why are images and rootdir not supported on CephFS? I have now configured a Directory storage which points to the CephFS mountpoint. When creating a container (Alpine Linux, 10 GB disk, 2 GB Memory), this happens in the blink of an eye when using Ceph RBD or local storage as root disk, but it takes very, very long (10 to 20 times longer) when using the CephFS-directory as root disk. On 16/02/2020 17:26, Frank Thommen wrote: > Thank you for the link. > > Even though Fabian Gruenbichler writes in the bugreport > (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD > offers all features of CephFS, this doesn't seem to be true(?), as > CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images > rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these > two storage types are complementary and RBD cannot replace CephFS. > > What would be a good practice (CephFS and RBD are already set up): > Create an RBD storage on the same (PVE based) Ceph storage that already > has CephFS on top of it and use one for templates and backups and the > other for images and rootdir? > > Won't it create problems when using the same Ceph pool with CephFS /and/ > RBD (this is probably rather a Ceph question, though) > > Additionally this might create problems with our inhouse tape backup, as > I don't think it supports backing up object storage... > > frank > > > On 15/02/2020 00:38, Gianni Milo wrote: >> This has been discussed in the past, see the post below for some >> answers... >> >> https://www.mail-archive.com/pve-user at pve.proxmox.com/msg10160.html >> >> >> >> >> On Fri, 14 Feb 2020 at 22:57, Frank Thommen >> >> wrote: >> >>> Dear all, >>> >>> the PVE documentation on >>> https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, >>> that "[File level based storage technologies] allow you to store content >>> of any type". However I found now - after having combined all available >>> disks in a big CephFS setup - that this is not true for CephFS, which >>> does not support images and rootdir. >>> >>> What is the reason, that of all filesystens, CephFS doesn't support the >>> main PVE content type? :-) >>> >>> Ceph RPD on the other hand, doesn't support backup. >>> >>> This seems to rule our Ceph (in whatever variant) as an unifying, shared >>> storage for PVE. Or do I miss an important point here? >>> >>> Frank >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From damkobaranov at gmail.com Mon Feb 17 14:21:51 2020 From: damkobaranov at gmail.com (Demetri A. Mkobaranov) Date: Mon, 17 Feb 2020 14:21:51 +0100 Subject: [PVE-User] Newbie in cluster - architecture questions Message-ID: <3109d488-bb65-0c7a-0a7e-7334cbb25e71@gmail.com> Hello, I'm approaching clustering technology with the final aim to learn HA architectures. 
My preparation on this is zero. My biggest difficulty at the moment is to understand which technology is for and what problem solves. I'm reading Proxmox documentation and it seems to imply this as background knowledge but I don't have it. Googling is very hard in this realm due to the vastness of the topic and the number of technologies "replaced" over the time or competing at the current time so I'd be grateful is someone can back me a up a little in this learning process. Questions: 1) From the Proxmox manual it seems like a cluster, without HA, offers just the ability to migrate a guest from a node to another one. Is this correct? 1B) Basically the configuration of one node is replicated (like /etc/pve via pmxcfs) on other nodes so that they are all aware of the guests and then corosync makes each node aware of the status of any other node and probably triggers the synchronization via pmxcfs. Right? Nothing more (unless further configuration) ? 2) Can I have nodes belonging to one cluster but living in different countries? Or in this case a multi-cluster is required (like 3 nodes in one datacenter and a cluster in another datacenter somehow linked together) ? Thank you for your time and efforts Demetri From t.lamprecht at proxmox.com Mon Feb 17 16:37:51 2020 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Mon, 17 Feb 2020 16:37:51 +0100 Subject: [PVE-User] Newbie in cluster - architecture questions In-Reply-To: <3109d488-bb65-0c7a-0a7e-7334cbb25e71@gmail.com> References: <3109d488-bb65-0c7a-0a7e-7334cbb25e71@gmail.com> Message-ID: Hi, On 2/17/20 2:21 PM, Demetri A. Mkobaranov wrote: > 1) From the Proxmox manual it seems like a cluster, without HA, offers just the ability to migrate a guest from a node to another one. Is this correct? That plus some other things: * you can manage all nodes through connecting to any node (multi master system) * replicate ZFS backed VMs easily between nodes * define backup jobs for VMs or resource pools and they'll run independently where the VM currently is. > > 1B) Basically the configuration of one node is replicated (like /etc/pve via pmxcfs) on other nodes so that they are all aware of the guests and then corosync makes each node aware of the status of any other node and probably triggers the synchronization via pmxcfs. Right? Nothing more (unless further configuration) ? Yeah, basically yes. A Proxmox VE base design is that each VM/CT belongs to a node, so other nodes must not touch them, but only redirect API calls and what not to the owning node. And yes, all VM/CT, storage, firewall and some other configuration is on /etc/pve and that is a realtime shared configuration file system. Any change to any file will get replicated to all nodes in a reliable, virtual synchronous, way. > > 2) Can I have nodes belonging to one cluster but living in different countries? Or in this case a multi-cluster is required (like 3 nodes in one datacenter and a cluster in another datacenter somehow linked together) ? > Theoretically yes, practically not so. Clustering has some assertions on timing, so you need LAN like latencies between those nodes. It can work with <= 10 milliseconds but ideal are <= 2 milliseconds round trip times. Linking clusters is planned and some work is going on (the API Token series was preparatory work for linking clusters) but it is not yet possible. hope that clears things up a bit. cheers, Thomas From damkobaranov at gmail.com Mon Feb 17 18:33:52 2020 From: damkobaranov at gmail.com (Demetri A. 
Mkobaranov) Date: Mon, 17 Feb 2020 18:33:52 +0100 Subject: [PVE-User] Newbie in cluster - architecture questions In-Reply-To: References: <3109d488-bb65-0c7a-0a7e-7334cbb25e71@gmail.com> Message-ID: Thanks a lot for your crystal clear explanation 1) From the Proxmox manual it seems like a cluster, without HA, offers just the ability to migrate a guest from a node to another one. Is this correct? > That plus some other things: > * you can manage all nodes through connecting to any node (multi master system) > * replicate ZFS backed VMs easily between nodes interesting. I need to study more this part. So ZFS instead of LVM for the nodes? > * define backup jobs for VMs or resource pools and they'll run independently where the VM currently is. do you know what's the most common solution in the industry in this case for storing the backups? an NFS running outside of the cluster? Or more specifically, say I have a 3 nodes cluster all in the same datacenter, would it make any sense to store the backups in a shared storage inside the cluster? > hope that clears things up a bit. Absolutely! I'm grateful Demetri From f.thommen at dkfz-heidelberg.de Mon Feb 17 21:42:11 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Mon, 17 Feb 2020 21:42:11 +0100 Subject: [PVE-User] Why are images and rootdir not supported on CephFS? In-Reply-To: <1581920353.wcq10qohfr.astroid@nora.none> References: <38c91d67-cfdf-7da3-f0f0-cd88dc95559d@dkfz-heidelberg.de> <1581920353.wcq10qohfr.astroid@nora.none> Message-ID: <26b28bd5-5208-a81c-8d0f-fa8cbc11d02f@dkfz-heidelberg.de> On 2/17/20 7:23 AM, Fabian Gr?nbichler wrote: > On February 16, 2020 5:26 pm, Frank Thommen wrote: >> Thank you for the link. >> >> Even though Fabian Gruenbichler writes in the bugreport >> (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD >> offers all features of CephFS, this doesn't seem to be true(?), as >> CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images >> rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these >> two storage types are complementary and RBD cannot replace CephFS. > > that comment was about using CephFS for guest image/volume storage. for > that use case, RBD has all the same features (even more), but with > better performance. obviously I didn't mean that RBD is a file system ;) > >> What would be a good practice (CephFS and RBD are already set up): >> Create an RBD storage on the same (PVE based) Ceph storage that already >> has CephFS on top of it and use one for templates and backups and the >> other for images and rootdir? > > yes > >> Won't it create problems when using the same Ceph pool with CephFS /and/ >> RBD (this is probably rather a Ceph question, though) > > you don't use the same Ceph pool, just the same OSDs (pools are logical > in Ceph, unlike with ZFS), so this is not a problem. > >> Additionally this might create problems with our inhouse tape backup, as >> I don't think it supports backing up object storage... > > the usual backup options are available - use vzdump, and then backup the > VMA files. or use some backup solution inside the guest. or both ;) Thank you for all the hints above. I will go along these lines. Still struggling with some Ceph concepts ;-) frank From gaio at sv.lnf.it Tue Feb 18 16:44:26 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Tue, 18 Feb 2020 16:44:26 +0100 Subject: [PVE-User] Debian buster, systemd, container and nesting=1 Message-ID: <20200218154426.GG3479@sv.lnf.it> I'm still on PVE 5.4. 
I've upgraded a (privileged) LXC container to debian buster, that was originally installed as debian jessie, then upgraded to stretch, but still without systemd. Upgrading to buster trigger systemd installation. After installation, most of the services, not all, does not start, eg apache: root at vnc:~# systemctl status apache2.service ? apache2.service - The Apache HTTP Server Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Tue 2020-02-18 16:06:35 CET; 44s ago Docs: https://httpd.apache.org/docs/2.4/ Process: 120 ExecStart=/usr/sbin/apachectl start (code=exited, status=226/NAMESPACE) feb 18 16:06:35 vnc systemd[1]: Starting The Apache HTTP Server... feb 18 16:06:35 vnc systemd[120]: apache2.service: Failed to set up mount namespacing: Permission denied feb 18 16:06:35 vnc systemd[120]: apache2.service: Failed at step NAMESPACE spawning /usr/sbin/apachectl: Permission denied feb 18 16:06:35 vnc systemd[1]: apache2.service: Control process exited, code=exited, status=226/NAMESPACE feb 18 16:06:35 vnc systemd[1]: apache2.service: Failed with result 'exit-code'. feb 18 16:06:35 vnc systemd[1]: Failed to start The Apache HTTP Server. google say me to add 'nesting=1' to 'features', that works, but looking at: https://pve.proxmox.com/wiki/Linux_Container i read: nesting= (default = 0) Allow nesting. Best used with unprivileged containers with additional id mapping. Note that this will expose procfs and sysfs contents of the host to the guest. i can convert this container to an unprivileged ones, but other no, for examples some containers are samba domain controller, that need a privileged container. There's another/better way to make systemd work on containers? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From gaio at sv.lnf.it Wed Feb 19 09:55:37 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 19 Feb 2020 09:55:37 +0100 Subject: [PVE-User] PVE 6: postfix in a debian buster container, 'satellite' does not work. Message-ID: <20200219085537.GB6251@sv.lnf.it> I'm not sue this is a debian/posfix or a consequences of packaging it in a container, so i try to ask here. I've setup a new container 'debian buster', and configure inside postfix a 'satellite system' to send all email to my internal SMTP server. 
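(For context, the Debian "satellite system" choice essentially comes down to pointing relayhost at the internal server in /etc/postfix/main.cf; the relay name below is only a placeholder:

  relayhost = [smtp.internal.example]

The compatibility_level detail that also turns out to matter is shown further down.)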
A local inject work: echo prova | mail -s test root Feb 17 16:35:29 vbaculalpb postfix/pickup[4047]: 318E18A4A: uid=0 from= Feb 17 16:35:29 vbaculalpb postfix/cleanup[4104]: 318E18A4A: message-id=<20200217153529.318E18A4A at vbaculalpb.localdomain> Feb 17 16:35:29 vbaculalpb postfix/qmgr[4048]: 318E18A4A: from=, size=408, nrcpt=1 (queue active) Feb 17 16:35:29 vbaculalpb postfix/smtp[4106]: 318E18A4A: to=, orig_to=, relay=mail.lilliput.linux.it[192.168.1.1]:25, delay=0.21, delays=0.02/0.01/0.01/0.16, dsn=2.0.0, status=sent (250 OK id=1j3iQj-0003Yc-7t) Feb 17 16:35:29 vbaculalpb postfix/qmgr[4048]: 318E18A4A: removed bu some services, that use 'localhost' as SMTP server, no: Feb 17 16:32:18 vbaculalpb postfix/master[383]: warning: process /usr/lib/postfix/sbin/smtpd pid 3687 exit status 1 Feb 17 16:32:18 vbaculalpb postfix/master[383]: warning: /usr/lib/postfix/sbin/smtpd: bad command startup -- throttling Feb 17 16:33:18 vbaculalpb postfix/smtpd[3688]: fatal: in parameter smtpd_relay_restrictions or smtpd_recipient_restrictions, specify at least one working instance of: reject_unauth_destination, defer_unauth_destination, reject, defer, defer_if_permit or check_relay_domains googling a bit lead me to: postconf compatibility_level=2 postfix reload and after that 'satellite system' work as usual. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From gaio at sv.lnf.it Wed Feb 19 11:39:06 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 19 Feb 2020 11:39:06 +0100 Subject: [PVE-User] How to restart ceph-mon? Message-ID: <20200219103906.GC6251@sv.lnf.it> I've upgraded ceph, PVE5, minor upgrade from 12.2.12 to 12.2.13. OSD nodes get rebooted, but i have also two nodes that are only monitors, and host some VM/LXC so i've tried to simply restart ceph-mon. But seems isineffective: root at thor:~# ps aux | grep ceph-[m]on ceph 2469 0.7 0.1 539852 67720 ? Ssl 2019 917:55 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph root at thor:~# systemctl restart ceph-mon at 3.service root at thor:~# ps aux | grep ceph-[m]on ceph 2469 0.7 0.1 539852 67720 ? Ssl 2019 917:55 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph I've tried to see in pve wiki if there's some know procedure to do ceph 'minor upgrades' but found nothing. Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From a.antreich at proxmox.com Wed Feb 19 11:55:03 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 19 Feb 2020 11:55:03 +0100 Subject: [PVE-User] How to restart ceph-mon? 
In-Reply-To: <20200219103906.GC6251@sv.lnf.it> References: <20200219103906.GC6251@sv.lnf.it> Message-ID: <20200219105503.GC2117767@dona.proxmox.com> Hello Marco, On Wed, Feb 19, 2020 at 11:39:06AM +0100, Marco Gaiarin wrote: > > I've upgraded ceph, PVE5, minor upgrade from 12.2.12 to 12.2.13. > > OSD nodes get rebooted, but i have also two nodes that are only > monitors, and host some VM/LXC so i've tried to simply restart > ceph-mon. But seems isineffective: > > root at thor:~# ps aux | grep ceph-[m]on > ceph 2469 0.7 0.1 539852 67720 ? Ssl 2019 917:55 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph > > root at thor:~# systemctl restart ceph-mon at 3.service > > root at thor:~# ps aux | grep ceph-[m]on > ceph 2469 0.7 0.1 539852 67720 ? Ssl 2019 917:55 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph > What does the status of the service show? systemctl status ceph-mon at 3.service To add, the numeric ID for MONs is an old concept and already depricated for some time now. Best recreate them, the default uses already the hostname for its ID. From gaio at sv.lnf.it Wed Feb 19 12:05:44 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 19 Feb 2020 12:05:44 +0100 Subject: [PVE-User] How to restart ceph-mon? In-Reply-To: <20200219105503.GC2117767@dona.proxmox.com> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> Message-ID: <20200219110544.GD6251@sv.lnf.it> Mandi! Alwin Antreich In chel di` si favelave... > What does the status of the service show? > systemctl status ceph-mon at 3.service Uh, never minded about that, damn me! root at thor:~# systemctl status ceph-mon at 3.service ? ceph-mon at 3.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon at .service; disabled; vendor preset: enabled) Drop-In: /lib/systemd/system/ceph-mon at .service.d ??ceph-after-pve-cluster.conf Active: failed (Result: exit-code) since Wed 2020-02-19 11:31:47 CET; 29min ago Process: 3434884 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id 3 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE) Main PID: 3434884 (code=exited, status=1/FAILURE) Feb 19 11:31:37 thor systemd[1]: ceph-mon at 3.service: Failed with result 'exit-code'. Feb 19 11:31:47 thor systemd[1]: ceph-mon at 3.service: Service hold-off time over, scheduling restart. Feb 19 11:31:47 thor systemd[1]: Stopped Ceph cluster monitor daemon. Feb 19 11:31:47 thor systemd[1]: ceph-mon at 3.service: Start request repeated too quickly. Feb 19 11:31:47 thor systemd[1]: Failed to start Ceph cluster monitor daemon. Feb 19 11:31:47 thor systemd[1]: ceph-mon at 3.service: Unit entered failed state. Feb 19 11:31:47 thor systemd[1]: ceph-mon at 3.service: Failed with result 'exit-code'. I've tried: systemctl stop ceph-mon at 3.service but old daemon is still alive: root at thor:~# systemctl stop ceph-mon at 3.service root at thor:~# ps aux | grep ceph-[m]on ceph 2469 0.7 0.1 539704 67408 ? Ssl 2019 918:08 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph it is time to kill it? > To add, the numeric ID for MONs is an old concept and already depricated > for some time now. Best recreate them, the default uses already the > hostname for its ID. OK, thanks. -- dott. 
Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From smr at kmi.com Wed Feb 19 12:55:33 2020 From: smr at kmi.com (Stefan M. Radman) Date: Wed, 19 Feb 2020 11:55:33 +0000 Subject: pvelocalhost Message-ID: What is the pvelocalhost alias in /etc/hosts good for? Is it still used in PVE6? Is it mandatory? If mandatory, can I use the loopback address as seen below? root at proxmox:~# head -1 /etc/hosts 127.0.0.1 localhost.localdomain localhost pvelocalhost Thanks Stefan CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From krienke at uni-koblenz.de Wed Feb 19 13:05:00 2020 From: krienke at uni-koblenz.de (Rainer Krienke) Date: Wed, 19 Feb 2020 13:05:00 +0100 Subject: [PVE-User] upgrade path to proxmox enterprise repos ? Message-ID: Hello, At the moment I run a proxmox cluster with a seperate ceph cluster as storage backend. I do not have a proxmox subscription yet. Instead I use the community repository to do some upgrades to proxmox. I know that this repos is not thought for a productive environment. My question is if I now start using proxmox deploying productive VMs, is there a problem upgrading the proxmox hosts later on with packages from the enterprise repos? I would not expect any problems, but perhaps someone has already done this and did/did not experience some kind of trouble? Thanks for your help Rainer -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312 Web: http://userpages.uni-koblenz.de/~krienke PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html From elacunza at binovo.es Wed Feb 19 13:07:43 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 19 Feb 2020 13:07:43 +0100 Subject: [PVE-User] upgrade path to proxmox enterprise repos ? In-Reply-To: References: Message-ID: <4508a47d-eda6-cef2-2f90-b6c6bd764181@binovo.es> Hi Rainer, You can switch from community repo to enterprise repo withou any issue, just change sources.list . Cheers Eneko El 19/2/20 a las 13:05, Rainer Krienke escribi?: > Hello, > > At the moment I run a proxmox cluster with a seperate ceph cluster as > storage backend. I do not have a proxmox subscription yet. Instead I use > the community repository to do some upgrades to proxmox. I know that > this repos is not thought for a productive environment. > > My question is if I now start using proxmox deploying productive VMs, is > there a problem upgrading the proxmox hosts later on with packages from > the enterprise repos? > > I would not expect any problems, but perhaps someone has already done > this and did/did not experience some kind of trouble? > > Thanks for your help > Rainer -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From m.limbeck at proxmox.com Wed Feb 19 14:30:35 2020 From: m.limbeck at proxmox.com (Mira Limbeck) Date: Wed, 19 Feb 2020 14:30:35 +0100 Subject: [PVE-User] pvelocalhost In-Reply-To: References: Message-ID: <7e20bebb-98ce-b15c-2191-e72b1bde8499@proxmox.com> It is neither used nor part of newer installations (5.3+ I think?). You can remove it. On 2/19/20 12:55 PM, Stefan M. Radman via pve-user wrote: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Wed Feb 19 15:01:55 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 19 Feb 2020 11:01:55 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Message-ID: HI there I change the bwlimit to 100000 inside /etc/vzdump and vzdump works normally for a couple of days and it's make happy. Now, I have the error again! No logs, no explanation! Just error pure and simple: 110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu) 110: 2020-02-18 22:18:06 INFO: status = running 110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup 110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163 110: 2020-02-18 22:18:07 INFO: include disk 'scsi0' 'local-lvm:vm-110-disk-0' 100G 110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0' 112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu) 112: 2020-02-18 22:19:00 INFO: status = running 112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup 112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165 112: 2020-02-18 22:19:01 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G 112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0' 116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu) 116: 2020-02-18 22:19:31 INFO: status = running 116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup 116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162 116: 2020-02-18 22:19:32 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G 116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0' --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em sex., 14 de fev. de 2020 ?s 14:22, Gianni Milo escreveu: > If it's happening randomly, my best guess would be that it might be related > to high i/o during the time frame that the backup takes place. > Have you tried creating multiple backup schedules which will take place at > different times ? Setting backup bandwidth limits might also help. > Check the PVE administration guide for more details on this. You could > check for any clues in syslog during the time that the failed backup takes > place as well. > > G. > > On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes > wrote: > > > HI guys > > > > Some problem but with two different vms... > > I also update Proxmox still in 5.x series, but no changes... Now this > > problem ocurrs twice, one night after other... > > I am very concerned about it! > > Please, Proxmox staff, is there something I can do to solve this issue? > > Anybody alread do a bugzilla??? 
> > > > Thanks > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qui., 13 de fev. de 2020 ?s 19:53, Atila Vasconcelos < > > atilav at lightspeed.ca> escreveu: > > > > > Hi, > > > > > > I had the same problem in the past and it repeats once a while.... its > > > very random; I could not find any way to reproduce it. > > > > > > But as it happens... it will go away. > > > > > > When you are almost forgetting about it, it will come again ;) > > > > > > I just learned to ignore it (and do manually the backup when it fails) > > > > > > I see in proxmox 6.x it is less frequent (but still happening once a > > > while). > > > > > > > > > ABV > > > > > > > > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote: > > > > Yeah! Me too... This problem is pretty random... Let see next week! > > > > --- > > > > Gilberto Nunes Ferreira > > > > > > > > (47) 3025-5907 > > > > (47) 99676-7530 - Whatsapp / Telegram > > > > > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > > > > > > > > > > > > > Em qui., 13 de fev. de 2020 ?s 09:29, Eneko Lacunza < > > elacunza at binovo.es> > > > > escreveu: > > > > > > > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of > > ideas > > > >> now, sorry!!! ;) > > > >> > > > >> El 13/2/20 a las 13:24, Gilberto Nunes escribi?: > > > >>> I can assure you... the disk is there! > > > >>> > > > >>> pvesm list local-lvm > > > >>> local-lvm:vm-101-disk-0 raw 53687091200 101 > > > >>> local-lvm:vm-102-disk-0 raw 536870912000 102 > > > >>> local-lvm:vm-103-disk-0 raw 322122547200 103 > > > >>> local-lvm:vm-104-disk-0 raw 214748364800 104 > > > >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 > > > >>> local-lvm:vm-105-disk-0 raw 751619276800 105 > > > >>> local-lvm:vm-106-disk-0 raw 161061273600 106 > > > >>> local-lvm:vm-107-disk-0 raw 536870912000 107 > > > >>> local-lvm:vm-108-disk-0 raw 214748364800 108 > > > >>> local-lvm:vm-109-disk-0 raw 107374182400 109 > > > >>> local-lvm:vm-110-disk-0 raw 107374182400 110 > > > >>> local-lvm:vm-111-disk-0 raw 107374182400 111 > > > >>> local-lvm:vm-112-disk-0 raw 128849018880 112 > > > >>> local-lvm:vm-113-disk-0 raw 53687091200 113 > > > >>> local-lvm:vm-113-state-antes_balloon raw 17704157184 113 > > > >>> local-lvm:vm-114-disk-0 raw 128849018880 114 > > > >>> local-lvm:vm-115-disk-0 raw 107374182400 115 > > > >>> local-lvm:vm-115-disk-1 raw 53687091200 115 > > > >>> local-lvm:vm-116-disk-0 raw 107374182400 116 > > > >>> local-lvm:vm-117-disk-0 raw 107374182400 117 > > > >>> local-lvm:vm-118-disk-0 raw 107374182400 118 > > > >>> local-lvm:vm-119-disk-0 raw 26843545600 119 > > > >>> local-lvm:vm-121-disk-0 raw 107374182400 121 > > > >>> local-lvm:vm-122-disk-0 raw 107374182400 122 > > > >>> local-lvm:vm-123-disk-0 raw 161061273600 123 > > > >>> local-lvm:vm-124-disk-0 raw 107374182400 124 > > > >>> local-lvm:vm-125-disk-0 raw 53687091200 125 > > > >>> local-lvm:vm-126-disk-0 raw 32212254720 126 > > > >>> local-lvm:vm-127-disk-0 raw 53687091200 127 > > > >>> local-lvm:vm-129-disk-0 raw 21474836480 129 > > > >>> > > > >>> ls -l /dev/pve/vm-110-disk-0 > > > >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> > > > ../dm-15 > > > >>> > > > >>> > > > >>> --- > > > >>> Gilberto Nunes Ferreira > > > >>> > > > >>> (47) 3025-5907 > > > >>> (47) 99676-7530 - Whatsapp / Telegram > > > >>> > > > >>> Skype: gilberto.nunes36 > > > >>> > > > >>> > > > >>> > > > >>> > > > >>> > > > >>> Em qui., 13 de fev. 
de 2020 ?s 09:19, Eneko Lacunza < > > > elacunza at binovo.es> > > > >>> escreveu: > > > >>> > > > >>>> What about: > > > >>>> > > > >>>> pvesm list local-lvm > > > >>>> ls -l /dev/pve/vm-110-disk-0 > > > >>>> > > > >>>> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: > > > >>>>> Quite strange to say the least > > > >>>>> > > > >>>>> > > > >>>>> ls /dev/pve/* > > > >>>>> /dev/pve/root /dev/pve/vm-109-disk-0 > > > >>>>> /dev/pve/vm-118-disk-0 > > > >>>>> /dev/pve/swap /dev/pve/vm-110-disk-0 > > > >>>>> /dev/pve/vm-119-disk-0 > > > >>>>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 > > > >>>>> /dev/pve/vm-121-disk-0 > > > >>>>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 > > > >>>>> /dev/pve/vm-122-disk-0 > > > >>>>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 > > > >>>>> /dev/pve/vm-123-disk-0 > > > >>>>> /dev/pve/vm-104-disk-0 > /dev/pve/vm-113-state-antes_balloon > > > >>>>> /dev/pve/vm-124-disk-0 > > > >>>>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 > > > >>>>> /dev/pve/vm-125-disk-0 > > > >>>>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 > > > >>>>> /dev/pve/vm-126-disk-0 > > > >>>>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 > > > >>>>> /dev/pve/vm-127-disk-0 > > > >>>>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 > > > >>>>> /dev/pve/vm-129-disk-0 > > > >>>>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 > > > >>>>> > > > >>>>> ls /dev/mapper/ > > > >>>>> control pve-vm--104--state--LUKPLAS > > > >>>>> pve-vm--115--disk--1 > > > >>>>> iscsi-backup pve-vm--105--disk--0 > > > >>>>> pve-vm--116--disk--0 > > > >>>>> mpatha pve-vm--106--disk--0 > > > >>>>> pve-vm--117--disk--0 > > > >>>>> pve-data pve-vm--107--disk--0 > > > >>>>> pve-vm--118--disk--0 > > > >>>>> pve-data_tdata pve-vm--108--disk--0 > > > >>>>> pve-vm--119--disk--0 > > > >>>>> pve-data_tmeta pve-vm--109--disk--0 > > > >>>>> pve-vm--121--disk--0 > > > >>>>> pve-data-tpool pve-vm--110--disk--0 > > > >>>>> pve-vm--122--disk--0 > > > >>>>> pve-root pve-vm--111--disk--0 > > > >>>>> pve-vm--123--disk--0 > > > >>>>> pve-swap pve-vm--112--disk--0 > > > >>>>> pve-vm--124--disk--0 > > > >>>>> pve-vm--101--disk--0 pve-vm--113--disk--0 > > > >>>>> pve-vm--125--disk--0 > > > >>>>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon > > > >>>>> pve-vm--126--disk--0 > > > >>>>> pve-vm--103--disk--0 pve-vm--114--disk--0 > > > >>>>> pve-vm--127--disk--0 > > > >>>>> pve-vm--104--disk--0 pve-vm--115--disk--0 > > > >>>>> pve-vm--129--disk--0 > > > >>>>> > > > >>>>> > > > >>>>> --- > > > >>>>> Gilberto Nunes Ferreira > > > >>>>> > > > >>>>> (47) 3025-5907 > > > >>>>> (47) 99676-7530 - Whatsapp / Telegram > > > >>>>> > > > >>>>> Skype: gilberto.nunes36 > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < > > > >> elacunza at binovo.es> > > > >>>>> escreveu: > > > >>>>> > > > >>>>>> It's quite strange, what about "ls /dev/pve/*"? 
> > > >>>>>> > > > >>>>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: > > > >>>>>>> n: Thu Feb 13 07:06:19 2020 > > > >>>>>>> a2web:~# lvs > > > >>>>>>> LV VG Attr LSize > > > Pool > > > >>>> Origin > > > >>>>>>> Data% Meta% Move Log Cpy%Sync Convert > > > >>>>>>> backup iscsi -wi-ao---- 1.61t > > > >>>>>>> > > > >>>>>>> data pve twi-aotz-- 3.34t > > > >>>>>>> 88.21 9.53 > > > >>>>>>> root pve -wi-ao---- 96.00g > > > >>>>>>> > > > >>>>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k 200.00g > > > data > > > >>>>>>> vm-104-disk-0 > > > >>>>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k 50.00g > > > data > > > >>>>>>> vm-113-disk-0 > > > >>>>>>> swap pve -wi-ao---- 8.00g > > > >>>>>>> > > > >>>>>>> vm-101-disk-0 pve Vwi-aotz-- 50.00g > > > data > > > >>>>>>> 24.17 > > > >>>>>>> vm-102-disk-0 pve Vwi-aotz-- 500.00g > > > data > > > >>>>>>> 65.65 > > > >>>>>>> vm-103-disk-0 pve Vwi-aotz-- 300.00g > > > data > > > >>>>>>> 37.28 > > > >>>>>>> vm-104-disk-0 pve Vwi-aotz-- 200.00g > > > data > > > >>>>>>> 17.87 > > > >>>>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- 16.49g > > > data > > > >>>>>>> 35.53 > > > >>>>>>> vm-105-disk-0 pve Vwi-aotz-- 700.00g > > > data > > > >>>>>>> 90.18 > > > >>>>>>> vm-106-disk-0 pve Vwi-aotz-- 150.00g > > > data > > > >>>>>>> 93.55 > > > >>>>>>> vm-107-disk-0 pve Vwi-aotz-- 500.00g > > > data > > > >>>>>>> 98.20 > > > >>>>>>> vm-108-disk-0 pve Vwi-aotz-- 200.00g > > > data > > > >>>>>>> 98.02 > > > >>>>>>> vm-109-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 93.68 > > > >>>>>>> vm-110-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 34.55 > > > >>>>>>> vm-111-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 79.03 > > > >>>>>>> vm-112-disk-0 pve Vwi-aotz-- 120.00g > > > data > > > >>>>>>> 93.78 > > > >>>>>>> vm-113-disk-0 pve Vwi-aotz-- 50.00g > > > data > > > >>>>>>> 65.42 > > > >>>>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- 16.49g > > > data > > > >>>>>>> 43.64 > > > >>>>>>> vm-114-disk-0 pve Vwi-aotz-- 120.00g > > > data > > > >>>>>>> 100.00 > > > >>>>>>> vm-115-disk-0 pve Vwi-a-tz-- 100.00g > > > data > > > >>>>>>> 70.28 > > > >>>>>>> vm-115-disk-1 pve Vwi-a-tz-- 50.00g > > > data > > > >>>>>>> 0.00 > > > >>>>>>> vm-116-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 26.34 > > > >>>>>>> vm-117-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 100.00 > > > >>>>>>> vm-118-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 100.00 > > > >>>>>>> vm-119-disk-0 pve Vwi-aotz-- 25.00g > > > data > > > >>>>>>> 18.42 > > > >>>>>>> vm-121-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 23.76 > > > >>>>>>> vm-122-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 100.00 > > > >>>>>>> vm-123-disk-0 pve Vwi-aotz-- 150.00g > > > data > > > >>>>>>> 37.89 > > > >>>>>>> vm-124-disk-0 pve Vwi-aotz-- 100.00g > > > data > > > >>>>>>> 30.73 > > > >>>>>>> vm-125-disk-0 pve Vwi-aotz-- 50.00g > > > data > > > >>>>>>> 9.02 > > > >>>>>>> vm-126-disk-0 pve Vwi-aotz-- 30.00g > > > data > > > >>>>>>> 99.72 > > > >>>>>>> vm-127-disk-0 pve Vwi-aotz-- 50.00g > > > data > > > >>>>>>> 10.79 > > > >>>>>>> vm-129-disk-0 pve Vwi-aotz-- 20.00g > > > data > > > >>>>>>> 45.04 > > > >>>>>>> > > > >>>>>>> cat /etc/pve/storage.cfg > > > >>>>>>> dir: local > > > >>>>>>> path /var/lib/vz > > > >>>>>>> content backup,iso,vztmpl > > > >>>>>>> > > > >>>>>>> lvmthin: local-lvm > > > >>>>>>> thinpool data > > > >>>>>>> vgname pve > > > >>>>>>> content rootdir,images > > > >>>>>>> > > > >>>>>>> iscsi: iscsi > > > >>>>>>> portal some-portal > > > >>>>>>> target 
some-target > > > >>>>>>> content images > > > >>>>>>> > > > >>>>>>> lvm: iscsi-lvm > > > >>>>>>> vgname iscsi > > > >>>>>>> base iscsi:0.0.0.scsi-mpatha > > > >>>>>>> content rootdir,images > > > >>>>>>> shared 1 > > > >>>>>>> > > > >>>>>>> dir: backup > > > >>>>>>> path /backup > > > >>>>>>> content images,rootdir,iso,backup > > > >>>>>>> maxfiles 3 > > > >>>>>>> shared 0 > > > >>>>>>> --- > > > >>>>>>> Gilberto Nunes Ferreira > > > >>>>>>> > > > >>>>>>> (47) 3025-5907 > > > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > > >>>>>>> > > > >>>>>>> Skype: gilberto.nunes36 > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> > > > >>>>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < > > > >>>> elacunza at binovo.es> > > > >>>>>>> escreveu: > > > >>>>>>> > > > >>>>>>>> Can you send the output for "lvs" and "cat > > /etc/pve/storage.cfg"? > > > >>>>>>>> > > > >>>>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: > > > >>>>>>>>> HI all > > > >>>>>>>>> > > > >>>>>>>>> Still in trouble with this issue > > > >>>>>>>>> > > > >>>>>>>>> cat daemon.log | grep "Feb 12 22:10" > > > >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE > > replication > > > >>>>>>>> runner... > > > >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE > > replication > > > >>>>>> runner. > > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of > > VM > > > >> 110 > > > >>>>>>>> (qemu) > > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > > > >> failed - > > > >>>>>> no > > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > > > >>>>>>>>> > > > >>>>>>>>> syslog > > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of > > VM > > > >> 110 > > > >>>>>>>> (qemu) > > > >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: > > -lock > > > >>>>>> backup > > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 > > > >> failed - > > > >>>>>> no > > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' > > > >>>>>>>>> > > > >>>>>>>>> pveversion > > > >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve) > > > >>>>>>>>> > > > >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) > > > >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) > > > >>>>>>>>> pve-kernel-4.15: 5.4-12 > > > >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 > > > >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 > > > >>>>>>>>> corosync: 2.4.4-pve1 > > > >>>>>>>>> criu: 2.11.1-1~bpo90 > > > >>>>>>>>> glusterfs-client: 3.8.8-1 > > > >>>>>>>>> ksm-control-daemon: 1.2-2 > > > >>>>>>>>> libjs-extjs: 6.0.1-2 > > > >>>>>>>>> libpve-access-control: 5.1-12 > > > >>>>>>>>> libpve-apiclient-perl: 2.0-5 > > > >>>>>>>>> libpve-common-perl: 5.0-56 > > > >>>>>>>>> libpve-guest-common-perl: 2.0-20 > > > >>>>>>>>> libpve-http-server-perl: 2.0-14 > > > >>>>>>>>> libpve-storage-perl: 5.0-44 > > > >>>>>>>>> libqb0: 1.0.3-1~bpo9 > > > >>>>>>>>> lvm2: 2.02.168-pve6 > > > >>>>>>>>> lxc-pve: 3.1.0-7 > > > >>>>>>>>> lxcfs: 3.0.3-pve1 > > > >>>>>>>>> novnc-pve: 1.0.0-3 > > > >>>>>>>>> proxmox-widget-toolkit: 1.0-28 > > > >>>>>>>>> pve-cluster: 5.0-38 > > > >>>>>>>>> pve-container: 2.0-41 > > > >>>>>>>>> pve-docs: 5.4-2 > > > >>>>>>>>> pve-edk2-firmware: 1.20190312-1 > > > >>>>>>>>> pve-firewall: 3.0-22 > > > >>>>>>>>> pve-firmware: 2.0-7 > > > >>>>>>>>> pve-ha-manager: 2.0-9 > > > >>>>>>>>> pve-i18n: 1.1-4 > > > >>>>>>>>> pve-libspice-server1: 0.14.1-2 > > > >>>>>>>>> pve-qemu-kvm: 3.0.1-4 > > > >>>>>>>>> 
pve-xtermjs: 3.12.0-1 > > > >>>>>>>>> qemu-server: 5.0-55 > > > >>>>>>>>> smartmontools: 6.5+svn4324-1 > > > >>>>>>>>> spiceterm: 3.0-5 > > > >>>>>>>>> vncterm: 1.5-3 > > > >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> Some help??? Sould I upgrade the server to 6.x?? > > > >>>>>>>>> > > > >>>>>>>>> Thanks > > > >>>>>>>>> > > > >>>>>>>>> --- > > > >>>>>>>>> Gilberto Nunes Ferreira > > > >>>>>>>>> > > > >>>>>>>>> (47) 3025-5907 > > > >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > > >>>>>>>>> > > > >>>>>>>>> Skype: gilberto.nunes36 > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < > > > >>>>>>>>> gilberto.nunes32 at gmail.com> escreveu: > > > >>>>>>>>> > > > >>>>>>>>>> Hi there > > > >>>>>>>>>> > > > >>>>>>>>>> I got a strage error last night. Vzdump complain about the > > > >>>>>>>>>> disk no exist or lvm volume in this case but the volume > exist, > > > >>>> indeed! > > > >>>>>>>>>> In the morning I have do a manually backup and it's working > > > >> fine... > > > >>>>>>>>>> Any advice? > > > >>>>>>>>>> > > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 > > (qemu) > > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running > > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup > > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165 > > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' > > > >>>>>>>> 'local-lvm:vm-112-disk-0' 120G > > > >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no > > > such > > > >>>>>>>> volume 'local-lvm:vm-112-disk-0' > > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 > > (qemu) > > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running > > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup > > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 > > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' > > > >>>>>>>> 'local-lvm:vm-116-disk-0' 100G > > > >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no > > > such > > > >>>>>>>> volume 'local-lvm:vm-116-disk-0' > > > >>>>>>>>>> --- > > > >>>>>>>>>> Gilberto Nunes Ferreira > > > >>>>>>>>>> > > > >>>>>>>>>> (47) 3025-5907 > > > >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram > > > >>>>>>>>>> > > > >>>>>>>>>> Skype: gilberto.nunes36 > > > >>>>>>>>>> > > > >>>>>>>>>> > > > >>>>>>>>>> > > > >>>>>>>>>> > > > >>>>>>>>> _______________________________________________ > > > >>>>>>>>> pve-user mailing list > > > >>>>>>>>> pve-user at pve.proxmox.com > > > >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>>>>>> -- > > > >>>>>>>> Zuzendari Teknikoa / Director T?cnico > > > >>>>>>>> Binovo IT Human Project, S.L. > > > >>>>>>>> Telf. 943569206 > > > >>>>>>>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun > > > (Gipuzkoa) > > > >>>>>>>> www.binovo.es > > > >>>>>>>> > > > >>>>>>>> _______________________________________________ > > > >>>>>>>> pve-user mailing list > > > >>>>>>>> pve-user at pve.proxmox.com > > > >>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>>>>>> > > > >>>>>>> _______________________________________________ > > > >>>>>>> pve-user mailing list > > > >>>>>>> pve-user at pve.proxmox.com > > > >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>>>> -- > > > >>>>>> Zuzendari Teknikoa / Director T?cnico > > > >>>>>> Binovo IT Human Project, S.L. > > > >>>>>> Telf. 943569206 > > > >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > > (Gipuzkoa) > > > >>>>>> www.binovo.es > > > >>>>>> > > > >>>>>> _______________________________________________ > > > >>>>>> pve-user mailing list > > > >>>>>> pve-user at pve.proxmox.com > > > >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>>>> > > > >>>>> _______________________________________________ > > > >>>>> pve-user mailing list > > > >>>>> pve-user at pve.proxmox.com > > > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>> -- > > > >>>> Zuzendari Teknikoa / Director T?cnico > > > >>>> Binovo IT Human Project, S.L. > > > >>>> Telf. 943569206 > > > >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > (Gipuzkoa) > > > >>>> www.binovo.es > > > >>>> > > > >>>> _______________________________________________ > > > >>>> pve-user mailing list > > > >>>> pve-user at pve.proxmox.com > > > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>>> > > > >>> _______________________________________________ > > > >>> pve-user mailing list > > > >>> pve-user at pve.proxmox.com > > > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >> > > > >> -- > > > >> Zuzendari Teknikoa / Director T?cnico > > > >> Binovo IT Human Project, S.L. > > > >> Telf. 943569206 > > > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > > > >> www.binovo.es > > > >> > > > >> _______________________________________________ > > > >> pve-user mailing list > > > >> pve-user at pve.proxmox.com > > > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >> > > > > _______________________________________________ > > > > pve-user mailing list > > > > pve-user at pve.proxmox.com > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From smr at kmi.com Wed Feb 19 16:13:22 2020 From: smr at kmi.com (Stefan M. Radman) Date: Wed, 19 Feb 2020 15:13:22 +0000 Subject: [PVE-User] pvelocalhost In-Reply-To: <7e20bebb-98ce-b15c-2191-e72b1bde8499@proxmox.com> References: <7e20bebb-98ce-b15c-2191-e72b1bde8499@proxmox.com> Message-ID: Thanks. That makes cluster hostfile maintenance much easier. > On Feb 19, 2020, at 14:30, Mira Limbeck wrote: > > It is neither used nor part of newer installations (5.3+ I think?). You can remove it. 
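If you want to clean the alias out of /etc/hosts on every cluster node in one go, something like this should do it (a sketch; the node names are placeholders and root SSH access to the nodes is assumed):

  for n in node1 node2 node3; do ssh root@$n "sed -i 's/[[:space:]]*pvelocalhost//' /etc/hosts"; done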
> CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From kazim.koybasi at gmail.com Thu Feb 20 07:35:17 2020 From: kazim.koybasi at gmail.com (Kazim Koybasi) Date: Thu, 20 Feb 2020 09:35:17 +0300 Subject: [PVE-User] Virtual Manager management per user configuration Message-ID: Hello, We would like to give a virtual machine service to our users in our campus so that they can create their own virtual machine and see only their own virtual machine. I found that it is possible from command line or with root access from Proxmox interface. Is it possible to create an environment an give permission per user with Proxmox so that they can create and only see their own virtual machine? Best Regards. From d.csapak at proxmox.com Thu Feb 20 08:04:13 2020 From: d.csapak at proxmox.com (Dominik Csapak) Date: Thu, 20 Feb 2020 08:04:13 +0100 Subject: [PVE-User] Virtual Manager management per user configuration In-Reply-To: References: Message-ID: <98e232a8-92eb-9231-c0be-3740262838fa@proxmox.com> On 2/20/20 7:35 AM, Kazim Koybasi wrote: > Hello, > > We would like to give a virtual machine service to our users in our campus > so that they can create their own virtual machine and see only their own > virtual machine. I found that it is possible from command line or with root > access from Proxmox interface. Is it possible to create an environment an > give permission per user with Proxmox so that they can create and only see > their own virtual machine? > Hi, this is not comfortably doable, for the following reasons for creating a vm, a user has to have: * allocate rights on the storage for the vm disks (which will give him also rights to see/edit/destroy all other disks on that storage) * allocate rights on /vms/{ID} which you can create beforehand, but there is not 'pool', iow the user has to use the assigned ids additionally, there is no mechanism for limiting resources per user (e.g. only some amount of cores) also, when deleting the vm, the acls to that vm will also get removed, meaning if you given a user the right to /vms/100 and he deletes the vm 100, he no longer has the rights to it finally, there is generally no concept of resource 'ownership' for users only privileges and acls if you can workaround/ignore/accept those issues, you should be fine, otherwise i would suggest either using or creating a seperate interface which handles all of that with the API[0] (handling ownership, limiting api calls, etc) hope this helps regards Dominik 0: https://pve.proxmox.com/wiki/Proxmox_VE_API From a.antreich at proxmox.com Thu Feb 20 09:33:29 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Thu, 20 Feb 2020 09:33:29 +0100 Subject: [PVE-User] How to restart ceph-mon? In-Reply-To: <20200219110544.GD6251@sv.lnf.it> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> Message-ID: <20200220083329.GD2117767@dona.proxmox.com> On Wed, Feb 19, 2020 at 12:05:44PM +0100, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > What does the status of the service show? > > systemctl status ceph-mon at 3.service > > Uh, never minded about that, damn me! 
> root@thor:~# systemctl status ceph-mon@3.service
> ● ceph-mon@3.service - Ceph cluster monitor daemon
>    Loaded: loaded (/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: enabled)
>   Drop-In: /lib/systemd/system/ceph-mon@.service.d
>            └─ceph-after-pve-cluster.conf
>    Active: failed (Result: exit-code) since Wed 2020-02-19 11:31:47 CET; 29min ago
>   Process: 3434884 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id 3 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
>  Main PID: 3434884 (code=exited, status=1/FAILURE)
>
> Feb 19 11:31:37 thor systemd[1]: ceph-mon@3.service: Failed with result 'exit-code'.
> Feb 19 11:31:47 thor systemd[1]: ceph-mon@3.service: Service hold-off time over, scheduling restart.
> Feb 19 11:31:47 thor systemd[1]: Stopped Ceph cluster monitor daemon.
> Feb 19 11:31:47 thor systemd[1]: ceph-mon@3.service: Start request repeated too quickly.
> Feb 19 11:31:47 thor systemd[1]: Failed to start Ceph cluster monitor daemon.
> Feb 19 11:31:47 thor systemd[1]: ceph-mon@3.service: Unit entered failed state.
> Feb 19 11:31:47 thor systemd[1]: ceph-mon@3.service: Failed with result 'exit-code'.
>
> I've tried:
>
> systemctl stop ceph-mon@3.service
>
> but the old daemon is still alive:
>
> root@thor:~# systemctl stop ceph-mon@3.service
> root@thor:~# ps aux | grep ceph-[m]on
> ceph 2469 0.7 0.1 539704 67408 ? Ssl 2019 918:08 /usr/bin/ceph-mon -i 3 --pid-file /var/run/ceph/mon.3.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph
>
> Is it time to kill it?

I suppose you did that already. Did it work?

--
Cheers,
Alwin

From elacunza at binovo.es Thu Feb 20 10:05:12 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 20 Feb 2020 10:05:12 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 Message-ID: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es>

Hi all,

On February 11th we upgraded a PVE 5.3 cluster to 5.4, then to 6.1. This is a hyperconverged cluster with 3 servers, redundant network, and Ceph with two storage pools, one HDD based and the other SSD based.

Each server consists of:
- Dell R530
- 1x Xeon E5-2620 8c/16t 2.1GHz
- 64GB RAM
- 4x 1Gbit ethernet (2 bonds)
- 2x 10Gbit ethernet (1 bond)
- 1x Intel S4500 480GB - system + Bluestore DB for HDDs
- 1x Intel S4500 480GB - Bluestore OSD
- 4x 1TB HDD - Bluestore OSD (with 30GB db on SSD)

There are two Dell N1224T switches; each bond has one interface to each switch. Bonds are active/passive, and all active interfaces are on the same switch.

vmbr0 is on a 2x1Gbit bond0
Ceph public and private are on 2x10Gbit bond2
Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS.

SSD disk wearout is at 0%.

It seems that since the upgrade, we're experiencing network connectivity issues at night, during the backup window.

We think that the backups may be the issue; until yesterday backups were done over vmbr0 with IPv4. As they nearly saturated the 1Gbit link, we changed the network and storage configuration so that backup NAS access was done over bond1, as it wasn't used previously. We're using IPv6 now because Synology can't configure two IPv4 addresses on a bond from the GUI.

But it seems the issue has happened again tonight (SQL Server connection drop). The VM has network connectivity in the morning, so it isn't a permanent problem.
We tried running the main VM's backup yesterday morning, but couldn't reproduce the issue, although during the regular backup window all 3 nodes are doing backups, and in the test we only performed the backup of the only VM stored on the SSD pool.

This VM has 8 vcores, 10GB of RAM, one disk (Virtio scsi0, 300GB, cache=writeback), and the network is e1000.

Backup reports:
INFO: status: 100% (322122547200/322122547200), sparse 22% (72698785792), duration 2416, read/write 3650/0 MB/s
INFO: transferred 322122 MB in 2416 seconds (133 MB/s)

And peaks like:
INFO: status: 70% (225552891904/322122547200), sparse 3% (12228284416), duration 2065, read/write 181/104 MB/s
INFO: status: 71% (228727980032/322122547200), sparse 3% (12228317184), duration 2091, read/write 122/122 MB/s
INFO: status: 72% (232054063104/322122547200), sparse 3% (12228349952), duration 2118, read/write 123/123 MB/s
INFO: status: 73% (235237539840/322122547200), sparse 3% (12230103040), duration 2147, read/write 109/109 MB/s
INFO: status: 74% (238500708352/322122547200), sparse 3% (12237438976), duration 2177, read/write 108/108 MB/s

Also, during backup we see the following messages in the syslog of the physical node:

Feb 20 00:00:18 sotllo pve-ha-lrm[3930696]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout
Feb 20 00:00:18 sotllo pve-ha-lrm[3930696]: VM 103 qmp command 'query-status' failed - got timeout#012
Feb 20 00:00:28 sotllo pve-ha-lrm[3930759]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries
Feb 20 00:00:28 sotllo pve-ha-lrm[3930759]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012
Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries
Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012
[...]
Feb 20 00:40:38 sotllo pve-ha-lrm[3948846]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout
Feb 20 00:40:38 sotllo pve-ha-lrm[3948846]: VM 103 qmp command 'query-status' failed - got timeout#012
Feb 20 00:41:28 sotllo pve-ha-lrm[3949193]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout
Feb 20 00:41:28 sotllo pve-ha-lrm[3949193]: VM 103 qmp command 'query-status' failed - got timeout#012

So it seems the backup is having a big impact on the VM. This is only seen for 3 of the 4 VMs in HA, but for the other VMs it is just logged twice, and not every day (they're on the HDD pool). For this VM there are lots of these logs every day.

CPU during backup is low on the physical server, about 1.5-3.5 max load and 10% max use.

Although it has been working fine until now, maybe e1000 emulation could be the issue? We'll have to schedule downtime, but we can try to change to virtio.

Any other ideas about what could be producing the issue?

Thanks a lot for reading through here!!
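As an aside, one way to test whether the backup I/O itself causes the stalls, sketched here with placeholder values (VM ID 103, the ~50 MB/s cap and the storage name 'nas-backup' are not from this thread):

  # back up only the suspect VM, outside the normal window, with a hard bandwidth cap (KB/s)
  vzdump 103 --mode snapshot --storage nas-backup --bwlimit 51200
  # meanwhile, from another shell, watch whether the guest and QEMU stay responsive
  ping -i 1 <guest-ip>
  qm monitor 103      # then 'info status'; if this hangs, it mirrors the qmp timeouts above

If the capped run stays clean and a full-speed run does not, the backup bandwidth is the lever to pull.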
All three nodes have the same versions:

root@sotllo:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-3-pve) pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e) pve-kernel-5.3: 6.1-3 pve-kernel-helper: 6.1-3 pve-kernel-4.15: 5.4-12 pve-kernel-5.3.13-3-pve: 5.3.13-3 pve-kernel-4.15.18-24-pve: 4.15.18-52 pve-kernel-4.15.18-10-pve: 4.15.18-32 pve-kernel-4.13.13-5-pve: 4.13.13-38 pve-kernel-4.13.13-2-pve: 4.13.13-33 ceph: 14.2.6-pve1 ceph-fuse: 14.2.6-pve1 corosync: 3.0.2-pve4 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.13-pve1 libpve-access-control: 6.0-6 libpve-apiclient-perl: 3.0-2 libpve-common-perl: 6.0-11 libpve-guest-common-perl: 3.0-3 libpve-http-server-perl: 3.0-4 libpve-storage-perl: 6.1-4 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 3.2.1-1 lxcfs: 3.0.3-pve60 novnc-pve: 1.1.0-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.1-3 pve-cluster: 6.1-4 pve-container: 3.0-19 pve-docs: 6.1-4 pve-edk2-firmware: 2.20191127-1 pve-firewall: 4.0-10 pve-firmware: 3.0-4 pve-ha-manager: 3.0-8 pve-i18n: 2.0-4 pve-qemu-kvm: 4.1.1-2 pve-xtermjs: 4.3.0-1 qemu-server: 6.1-5 smartmontools: 7.1-pve2 spiceterm: 3.1-1 vncterm: 1.6-1 zfsutils-linux: 0.8.3-pve1

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2ª izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

From devzero at web.de Thu Feb 20 13:40:33 2020 From: devzero at web.de (Roland @web.de) Date: Thu, 20 Feb 2020 13:40:33 +0100 Subject: [PVE-User] question regarding quorum/qdevice on raspberry pi for dual-node cluster Message-ID:

hello,

on this page https://pve.proxmox.com/wiki/Raspberry_Pi_as_third_node the following is stated:

"Raspberry Pi as third node - This is only suited for testing or homelab use. Never use it in a production environment! Simply use a QDevice!"

On https://pve.proxmox.com/wiki/Roadmap for Proxmox 5.4 there is the following announcement:

QDevice support via `pvecm`
* primarily for small 2-node clusters adding a qdevice can help mitigate the downside of not being able to reboot one node without losing quorum (and thus the ability to make any changes in the cluster)
** Can also help in clusters with a larger even number of nodes by providing a tie-break vote.
* Integration into pvecm and PVE stack vastly simplifies adding a qdevice (it was possible manually before as well)

my question is why there is information around which looks like it discourages a Raspberry Pi as third node for quorum, i.e. is a Raspberry Pi with a QDevice (-> https://blog.jenningsga.com/proxmox-keeping-quorum-with-qdevices/) suitable for production use or not?

regards
roland

From gianni.milo22 at gmail.com Thu Feb 20 13:48:10 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Thu, 20 Feb 2020 12:48:10 +0000 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID:

Hello,

See comments below...

> vmbr0 is on a 2x1Gbit bond0
> Ceph public and private are on 2x10Gbit bond2
> Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS.

Where's the cluster (corosync) traffic flowing? On vmbr0? It would be a good idea to split that as well if possible (perhaps by using a different VLAN?).
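A minimal sketch of what the "different VLAN" idea could look like on one node (VLAN 50 and the 10.10.50.0/24 addressing are invented for illustration; only the non-persistent form is shown, the equivalent stanza would go into /etc/network/interfaces to survive reboots):

  ip link add link bond0 name bond0.50 type vlan id 50   # tagged sub-interface on the existing bond
  ip addr add 10.10.50.11/24 dev bond0.50
  ip link set bond0.50 up
  # corosync's ring address in /etc/pve/corosync.conf would then be moved onto this network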
> We think that the backups may be the issue; until yesterday backups were done over vmbr0 with IPv4; as they nearly saturated the 1Gbit link, we changed the network and storage configuration so that backup NAS access was done over bond1, as it wasn't used previously. We're using IPv6 now because Synology can't configure two IPv4 addresses on a bond from the GUI.

Using a separate network for the backup traffic should always help, that's a good decision. I'm having difficulties understanding why you had to configure 2 IPv4 addresses on a single bond - why do you need 2 of them?

> But it seems the issue has happened again tonight (SQL Server connection drop). The VM has network connectivity in the morning, so it isn't a permanent problem.

Do the affected VMs listen on the vmbr0 network for "outside" communication? Is that the interface where the SQL server is accepting the connections from?

> We tried running the main VM backup yesterday morning, but couldn't reproduce the issue, although during regular backup all 3 nodes are doing backups and in the test we only performed the backup of the only VM stored on the SSD pool.

How about reducing (or scheduling at different times) the backup jobs on each node, at least for testing whether the backup is causing the problem?

> Backup reports:
> INFO: status: 100% (322122547200/322122547200), sparse 22% (72698785792), duration 2416, read/write 3650/0 MB/s
> INFO: transferred 322122 MB in 2416 seconds (133 MB/s)
>
> And peaks like:
> INFO: status: 70% (225552891904/322122547200), sparse 3% (12228284416), duration 2065, read/write 181/104 MB/s

Have you tried setting (bandwidth) limits on the backup jobs to see if that helps?

> Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries
> Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012
> [...]

Looks like the host resources (where this specific VM is running) are exhausted at this point, or perhaps the VM itself is overloaded somehow.

> So it seems backup is having a big impact on the VM.

Yes, indeed...

> This is only seen for 3 of the 4 VMs in HA, but for the other VMs it is just logged twice, and not every day (they're on the HDD pool). For this VM there are lots of these logs every day.

Are there any scheduled (I/O intensive) jobs running within these VMs at the same time the host(s) are trying to back them up?

> CPU during backup is low on the physical server, about 1.5-3.5 max load and 10% max use.

How about the storage (Ceph pools in this case) I/O where these VMs are running? Is it struggling during the backup time?

> Although it has been working fine until now, maybe e1000 emulation could be the issue? We'll have to schedule downtime but can try to change to virtio.

If that's what all affected VMs have in common, then yes, definitely that could be one of the reasons (even though you mentioned that they were working fine before the PVE upgrade?). Is there a specific reason you need e1000 emulation? virtio performs much better.

G.
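A rough sketch of the two knobs suggested above - the 50 MB/s figure, VM ID 103 and the MAC shown are examples, not values from this thread:

  # cluster-wide default bandwidth cap for backup jobs, in KB/s (51200 ~ 50 MB/s)
  echo 'bwlimit: 51200' >> /etc/vzdump.conf
  # switch the guest NIC from e1000 to virtio, reusing the VM's existing MAC so the guest
  # does not see a brand-new adapter (virtio drivers must be present in the guest first)
  qm set 103 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0

The NIC model change only takes effect after a full stop/start of the VM, which is the downtime mentioned above.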
From elacunza at binovo.es Thu Feb 20 14:47:50 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 20 Feb 2020 14:47:50 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID:

Hi Gianni,

El 20/2/20 a las 13:48, Gianni Milo escribió:
> See comments below...

Thanks for the comments!

>> vmbr0 is on a 2x1Gbit bond0
>> Ceph public and private are on 2x10Gbit bond2
>> Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS.
>
> Where's the cluster (corosync) traffic flowing? On vmbr0? Would be a good idea to split that as well if possible (perhaps by using a different VLAN?).

Yes, it's on vmbr0 (bond0). We haven't noticed any cluster issues; the VMs don't have much network traffic.

>> We think that the backups may be the issue; until yesterday backups were done over vmbr0 with IPv4; as they nearly saturated the 1Gbit link, we changed the network and storage configuration so that backup NAS access was done over bond1, as it wasn't used previously. We're using IPv6 now because Synology can't configure two IPv4 addresses on a bond from the GUI.
>
> Using a separate network for the backup traffic should always help, that's a good decision. I'm having difficulties understanding why you had to configure 2 IPv4 addresses on a single bond, why you need 2 of them?

The idea was to use a different subnet for backup traffic, and to configure it on bond1 of the Proxmox nodes, so bond1 was used instead of bond0 for backups. As the NAS didn't support it (it only has 1 bond), we changed to IPv6 on bond1 for backup traffic (it's a routing issue; using a different subnet is just for making it easy). The site is remote so I didn't want to lose the current network config of the NAS.

>> But it seems the issue has happened again tonight (SQL Server connection drop). The VM has network connectivity in the morning, so it isn't a permanent problem.
>
> Do the affected VMs listen on the vmbr0 network for "outside" communication? Is that the interface where the SQL server is accepting the connections from?

Yes, we have only vmbr0 on the cluster; all VMs are connected to it, as is the outside world (bond0).

>> We tried running the main VM backup yesterday morning, but couldn't reproduce the issue, although during regular backup all 3 nodes are doing backups and in the test we only performed the backup of the only VM stored on the SSD pool.
>
> How about reducing (or scheduling at different times) the backup jobs on each node, at least for testing if the backup is causing the problem.

I'll check with the site admin about this; I didn't really think about it, but it could help determine if that is the issue, thanks!

>> Backup reports:
>> INFO: status: 100% (322122547200/322122547200), sparse 22% (72698785792), duration 2416, read/write 3650/0 MB/s
>> INFO: transferred 322122 MB in 2416 seconds (133 MB/s)
>>
>> And peaks like:
>> INFO: status: 70% (225552891904/322122547200), sparse 3% (12228284416), duration 2065, read/write 181/104 MB/s
>
> Have you tried setting (bandwidth) limits on the backup jobs and see if that helps?

Not really. I've looked through the docs, but it seems I can only affect write bandwidth on the NAS (it only has backups). This would affect read, I guess...
>> Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries
>> Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012
>> [...]
>
> Looks like the host resources (where this specific VM is running on) are exhausted at this point, or perhaps the VM itself is overloaded somehow.

I can't see any indication of something like that in the VM and Proxmox node graphs though... :( The VM is below 20% CPU use... and the node is even lower... I think a Bluestore OSD should be able to use 4-6 cores before it hits limits?

>> This is only seen for 3 of the 4 VMs in HA, but for the other VMs it is just logged twice, and not every day (they're on the HDD pool). For this VM there are lots of these logs every day.
>
> Are there any scheduled (I/O intensive) jobs running within these VMs at the same time the host(s) are trying to back them up?

I don't think so; at least there isn't any wait I/O at all on this VM (SSD pool).

>> CPU during backup is low on the physical server, about 1.5-3.5 max load and 10% max use.
>
> How about the storage (Ceph pools in this case) I/O where these VMs are running? Is it struggling during the backup time?

I don't have this data, but looking at CPU use I don't expect this to be the case; the storage of the VM is SSD. If disk/Ceph were the issue, I'd expect much more CPU use on the physical nodes...

>> Although it has been working fine until now, maybe e1000 emulation could be the issue? We'll have to schedule downtime but can try to change to virtio.
>
> If that's what all affected VMs have in common, then yes, definitely that could be one of the reasons (even though you mentioned that they were working fine before the PVE upgrade?). Is there a specific reason you need e1000 emulation? virtio performs much better.

Most VMs were P2V, so e1000 seemed the natural choice to minimize hardware changes on the Windows hosts. That worked really well, so we didn't look to change them, and the VMs are not very network intensive.

Thanks a lot for your comments. I'll check with the site managers about changing the backup schedule. Some nodes need 6-7 hours so that won't be trivial, but we'd be able to extract info from that.

Will report back when we have more data.

Regards
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2ª izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

From a.lauterer at proxmox.com Thu Feb 20 14:56:59 2020 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Thu, 20 Feb 2020 14:56:59 +0100 Subject: [PVE-User] question regarding quorum/qdevice on raspberry pi for dual-node cluster In-Reply-To: References: Message-ID: <690405f9-c744-0e96-a2b5-8fd48d893054@proxmox.com>

That wiki page describes how to set up a full corosync instance on the RPi.

Using one with a QDevice and the corosync-qnetd service installed on it is fine. See our documentation for it [0].

I updated the warning on the Wiki page to make it clearer why that wiki page needs to be considered with care.

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support

On 2/20/20 1:40 PM, Roland @web.de wrote:
> [...]
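A minimal sketch of the documented QDevice flow Aaron points to (the address 192.0.2.10 stands in for the Raspberry Pi / external host):

  apt install corosync-qnetd        # on the external QDevice host (e.g. the Pi)
  apt install corosync-qdevice      # on every cluster node
  pvecm qdevice setup 192.0.2.10    # run once, on one cluster node
  pvecm status                      # should now show the additional QDevice vote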
From gaio at sv.lnf.it Thu Feb 20 15:14:01 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Thu, 20 Feb 2020 15:14:01 +0100 Subject: [PVE-User] How to restart ceph-mon? In-Reply-To: <20200220083329.GD2117767@dona.proxmox.com> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> <20200220083329.GD2117767@dona.proxmox.com> Message-ID: <20200220141401.GH2769@sv.lnf.it>

Mandi! Alwin Antreich
In chel di` si favelave...

> > is it time to kill it?
> I suppose you did that already. Did it work?

No, I've done it just now. But yes, a 'kill' worked. The monitor restarted.

Only a little note. On boot, the monitor runs with this cmdline:

root@deadpool:~# ps aux | grep ceph-[m]on
ceph 2402 0.6 2.1 808428 356540 ? Ssl feb18 16:43 /usr/bin/ceph-mon -i 4 --pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph

while on 'systemctl start ceph-mon@.service' it runs with:

root@hulk:~# ps aux | grep ceph-[m]on
ceph 3276772 36.0 0.7 768580 357484 ? Ssl 15:05 0:04 /usr/bin/ceph-mon -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

i.e., without '--pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf' and with a different cmdline.

FYI.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797
Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From devzero at web.de Thu Feb 20 15:20:48 2020 From: devzero at web.de (Roland @web.de) Date: Thu, 20 Feb 2020 15:20:48 +0100 Subject: [PVE-User] question regarding quorum/qdevice on raspberry pi for dual-node cluster In-Reply-To: <690405f9-c744-0e96-a2b5-8fd48d893054@proxmox.com> References: <690405f9-c744-0e96-a2b5-8fd48d893054@proxmox.com> Message-ID: <85f25230-1097-81e6-b47d-808926dfb737@web.de>

thank you,

why would we need a full corosync instance on a RPi at all when it's not supported and a QDevice is the better way to get quorum? i.e. who is using full corosync on a RPi at all, and for what, besides a quorum/witness device? Is there a real purpose/benefit, or does that wiki page simply describe the "old way to go" before QDevice existed?
regards
roland

On 20.02.20 at 14:56, Aaron Lauterer wrote:
> [...]

From gilberto.nunes32 at gmail.com Thu Feb 20 16:26:05 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 20 Feb 2020 12:26:05 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Message-ID:

Any advice?

---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36

On Wed, 19 Feb 2020 at 11:01, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
> Hi there
>
> I changed the bwlimit to 100000 inside /etc/vzdump and vzdump worked normally for a couple of days, which made me happy.
> Now I have the error again! No logs, no explanation!
> Just the error, pure and simple:
>
> 110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu)
> 110: 2020-02-18 22:18:06 INFO: status = running
> 110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup
> 110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163
> 110: 2020-02-18 22:18:07 INFO: include disk 'scsi0' 'local-lvm:vm-110-disk-0' 100G
> 110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0'
>
> 112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu)
> 112: 2020-02-18 22:19:00 INFO: status = running
> 112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup
> 112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165
> 112: 2020-02-18 22:19:01 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G
> 112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0'
>
> 116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu)
> 116: 2020-02-18 22:19:31 INFO: status = running
> 116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup
> 116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162
> 116: 2020-02-18 22:19:32 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G
> 116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0'
>
> ---
> Gilberto Nunes Ferreira
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
> Skype: gilberto.nunes36
>
> On Fri, 14 Feb 2020 at 14:22, Gianni Milo <gianni.milo22 at gmail.com> wrote:
>> If it's happening randomly, my best guess would be that it might be related to high I/O during the time frame that the backup takes place.
>> Have you tried creating multiple backup schedules which will take place at different times? Setting backup bandwidth limits might also help.
>> Check the PVE administration guide for more details on this. You could check for any clues in syslog during the time that the failed backup takes place as well.
>>
>> G.
>>
>> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>> > Hi guys
>> > Same problem, but with two different VMs...
>> > I also updated Proxmox, still in the 5.x series, but no change... Now this problem has occurred twice, one night after the other...
>> > I am very concerned about it!
>> > Please, Proxmox staff, is there something I can do to solve this issue?
>> > Has anybody already filed a bugzilla report?
>> >
>> > Thanks
>> >
>> > On Thu, 13 Feb 2020 at 19:53, Atila Vasconcelos <atilav at lightspeed.ca> wrote:
>> > > Hi,
>> > > I had the same problem in the past and it repeats once in a while... it's very random; I could not find any way to reproduce it.
>> > > But as it happens... it will go away.
>> > > When you are almost forgetting about it, it will come again ;)
>> > > I just learned to ignore it (and do the backup manually when it fails)
>> > > I see in Proxmox 6.x it is less frequent (but still happening once in a while).
>> > >
>> > > ABV
>> > >
>> > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote:
>> > > > Yeah! Me too... This problem is pretty random... Let's see next week!
>> > > > [...]

From a.antreich at proxmox.com Thu Feb 20 17:28:20 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Thu, 20 Feb 2020 17:28:20 +0100 Subject: [PVE-User] How to restart ceph-mon?
In-Reply-To: <20200220141401.GH2769@sv.lnf.it> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> <20200220083329.GD2117767@dona.proxmox.com> <20200220141401.GH2769@sv.lnf.it> Message-ID: <20200220162820.GA20050@dona.proxmox.com>

On Thu, Feb 20, 2020 at 03:14:01PM +0100, Marco Gaiarin wrote:
> Mandi! Alwin Antreich
> In chel di` si favelave...
>
> No, I've done it just now. But yes, a 'kill' worked. The monitor restarted.
> [...]
> i.e., without '--pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.conf' and with a different cmdline.

Yes, that looks strange. But as said before, it is deprecated to use IDs. Best destroy and re-create the MONs one by one. The default command will create them with the hostname as ID. Then this phenomenon should disappear as well.

--
Cheers,
Alwin

From damkobaranov at gmail.com Thu Feb 20 17:38:52 2020 From: damkobaranov at gmail.com (Demetri A. Mkobaranov) Date: Thu, 20 Feb 2020 17:38:52 +0100 Subject: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist Message-ID:

I disabled IPv6 on my Proxmox host and, as already reported by others, I very often get this message in syslog:

status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information

I suppose that I need to use ip6tables-save > path/file (or touch path/file, considering that there are no rules?) so that ip6tables-restore can work and pve-firewall won't output the warning. Right? The problem is that I don't know "path/file".

How do you deal with this?

Cheers!
Demetri

From leandro at tecnetmza.com.ar Thu Feb 20 17:54:57 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 20 Feb 2020 13:54:57 -0300 Subject: [PVE-User] PVE storage layout question. Message-ID:

Hi guys.
I have a very old Proxmox version (4.x). There I have only one partition where I store both VMs and ISO images.
Now I have installed the latest PVE version. After the default install process finished, I can see two partitions created (LVM and LVM-thin).
After reading the storage documentation, it is not clear to me what the difference between them is.
Can I use only one partition (like in my old Proxmox 4)?
Any advice regarding good storage layout practice would be appreciated.
Currently I have a 5TB RAID5 storage available.

Regards,
Leandro.

From s.ivanov at proxmox.com Thu Feb 20 19:29:21 2020 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Thu, 20 Feb 2020 19:29:21 +0100 Subject: [PVE-User] PVE 6: postfix in a debian buster container, 'satellite' does not work. In-Reply-To: <20200219085537.GB6251@sv.lnf.it> References: <20200219085537.GB6251@sv.lnf.it> Message-ID: <20200220192921.4c284cc3@rosa.proxmox.com>

Hi,

Thanks for reporting this!

On Wed, 19 Feb 2020 09:55:37 +0100 Marco Gaiarin wrote:
> I'm not sure whether this is a Debian/Postfix issue or a consequence of packaging it in a container, so I'm trying to ask here.
The issue comes from the postfix main.cf added to the container image: it does not contain the 'compatibility_level=2' setting, and without that, with the minimal shipped config, the system refuses mail (because it would otherwise become an open relay).

I sent a patch to the pve-devel list for discussion:
https://pve.proxmox.com/pipermail/pve-devel/2020-February/041879.html

cheers,
stoiko

> I've set up a new container 'debian buster' and configured postfix inside as a 'satellite system' to send all email to my internal SMTP server.
>
> A local inject works:
>
> echo prova | mail -s test root
>
> Feb 17 16:35:29 vbaculalpb postfix/pickup[4047]: 318E18A4A: uid=0 from=
> Feb 17 16:35:29 vbaculalpb postfix/cleanup[4104]: 318E18A4A: message-id=<20200217153529.318E18A4A at vbaculalpb.localdomain>
> Feb 17 16:35:29 vbaculalpb postfix/qmgr[4048]: 318E18A4A: from=, size=408, nrcpt=1 (queue active)
> Feb 17 16:35:29 vbaculalpb postfix/smtp[4106]: 318E18A4A: to=, orig_to=, relay=mail.lilliput.linux.it[192.168.1.1]:25, delay=0.21, delays=0.02/0.01/0.01/0.16, dsn=2.0.0, status=sent (250 OK id=1j3iQj-0003Yc-7t)
> Feb 17 16:35:29 vbaculalpb postfix/qmgr[4048]: 318E18A4A: removed
>
> but some services that use 'localhost' as SMTP server do not:
>
> Feb 17 16:32:18 vbaculalpb postfix/master[383]: warning: process /usr/lib/postfix/sbin/smtpd pid 3687 exit status 1
> Feb 17 16:32:18 vbaculalpb postfix/master[383]: warning: /usr/lib/postfix/sbin/smtpd: bad command startup -- throttling
> Feb 17 16:33:18 vbaculalpb postfix/smtpd[3688]: fatal: in parameter smtpd_relay_restrictions or smtpd_recipient_restrictions, specify at least one working instance of: reject_unauth_destination, defer_unauth_destination, reject, defer, defer_if_permit or check_relay_domains
>
> googling a bit led me to:
>
> postconf compatibility_level=2
> postfix reload
>
> and after that the 'satellite system' works as usual.

From smr at kmi.com Thu Feb 20 21:05:33 2020 From: smr at kmi.com (Stefan M. Radman) Date: Thu, 20 Feb 2020 20:05:33 +0000 Subject: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist In-Reply-To: References: Message-ID:

Try /etc/sysconfig/ip6tables

Stefan

# fgrep -A1 /etc/sysconfig/ip6tables /etc/sysconfig/ip6tables-config
# Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets stopped
# (e.g. on system shutdown).
--
# Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets
# restarted.
--
# Save counters for rules and chains to /etc/sysconfig/ip6tables if
# 'service ip6tables save' is called or on stop or restart if SAVE_ON_STOP or

On Feb 20, 2020, at 17:38, Demetri A. Mkobaranov wrote:
> [...]
From proxmox-user at mattern.org Thu Feb 20 21:22:16 2020 From: proxmox-user at mattern.org (proxmox-user at mattern.org) Date: Thu, 20 Feb 2020 21:22:16 +0100 Subject: [PVE-User] PVE storage layout question. In-Reply-To: References: Message-ID: <8d23f135-d22c-d7b9-e8e4-8c5e21b15616@mattern.org>

Hi,

On 20.02.20 at 17:54, Leandro Roggerone wrote:
> Hi guys.
> I have a very old Proxmox version (4.x). There I have only one partition where I store both VMs and ISO images.
> Now I have installed the latest PVE version.

Did you reinstall your server? You can update instead - it's really easy, see the wiki.

> After the default install process finished, I can see two partitions created (LVM and LVM-thin).

The main difference (on a single server) between LVM and LVM-thin in Proxmox is that you can't use snapshots with "normal" LVM volumes. You have to use LVM-thin for that.

> After reading the storage documentation, it is not clear to me what the difference between them is.
> Can I use only one partition (like in my old Proxmox 4)?

Proxmox uses LVM as a block-level storage. You can't use it for files directly (https://pve.proxmox.com/pve-docs/chapter-pvesm.html). So you need another partition/LVM volume with a filesystem for your ISOs. LVM is really flexible: you can create a logical volume with a filesystem from the shell and mount it to use it for ISOs, backups or whatever.

> Any advice regarding good storage layout practice would be appreciated.

Depends on your usage. If possible, do some fio tests with different RAID levels, stripe sizes, cache settings...

> Currently I have a 5TB RAID5 storage available.
>
> Regards,
> Leandro.

From humbertos at ifsc.edu.br Fri Feb 21 12:42:27 2020 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Fri, 21 Feb 2020 08:42:27 -0300 (BRT) Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID: <2125216204.4207080.1582285347306.JavaMail.zimbra@ifsc.edu.br>

Hi.

I've had many problems with IPv6. Sometimes IPv6 on the VMs stops working; sometimes IPv6 on the host stops too. It happens only on the Proxmox cluster (VMs and hosts); other devices are not affected. Here, the IPv6 default route is lost.

This week I disabled IPv6 on DNS and I'm using only IPv4. Perhaps this could be your problem too.

Humberto.
De: "Eneko Lacunza" Para: "PVE User List" Enviadas: Quinta-feira, 20 de fevereiro de 2020 6:05:12 Assunto: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 Hi all, On february 11th we upgraded a PVE 5.3 cluster to 5.4, then to 6.1 . This is an hyperconverged cluster with 3 servers, redundant network, Ceph with two storage pools, one HDD based and the other SSD based: Each server consists of: - Dell R530 - 1xXeon E5-2620 8c/16t 2.1Ghz - 64GB RAM - 4x1Gbit ethernet (2 bonds) - 2x10Gbit ethernet (1 bond) - 1xIntel S4500 480GB - System + Bluestore DB for HDDs - 1xIntel S4500 480GB - Bluestore OSD - 4x1TB HDD - Bluestore OSD (with 30GB db on SSD) There are two Dell n1224T switches, each bond has one interface to each switch. Bonds are Active/passive, all active interfaces are on the same switch. vmbr0 is on a 2x1Gbit bond0 Ceph public and private are on 2x10Gbit bond2 Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS. SSD disk wearout is at 0%. It seems that since the upgrade, were're experiencing network connectivity issues in the night, during the backup window. We think that the backups may be the issue; until yesterday backups were done over vmbr0 with IPv4; as they nearly saturated the 1Gbit link, we changed the network and storage configuration so that backup NAS access was done over bond1, as it wasn't used previously. We're using IPv6 now because Synology can't configure two IPv4 on a bond from the GUI. But it seems the issue has happened again tonight (SQL Server connection drop). VM has network connectivity on the morning, so it isn't a permanent problem. We tried running the main VM backup yesterday morning, but couldn't reproduce the issue, although during regular backup all 3 nodes are doing backups and in the test we only performed the backup of the only VM storaged on SSD pool. This VM has 8vcores, 10GB of RAM, one disk Virtio scsi0 300GB cache=writeback, network is e1000. 
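Should the e1000 emulation turn out to be part of the problem, switching the NIC model to virtio is a one-line change per VM; a sketch, keeping the existing MAC so the guest's addressing stays stable (VM 103 is taken from the syslog excerpts below, the MAC shown is only a placeholder):

qm config 103 | grep ^net0    # note the current MAC address,
                              # e.g. net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
qm set 103 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# the guest needs virtio drivers (Debian 9/10 ship them); the change is applied on the
# next full stop/start of the VM unless network hotplug is enabled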
Backup reports: NFO: status: 100% (322122547200/322122547200), sparse 22% (72698785792), duration 2416, read/write 3650/0 MB/s INFO: transferred 322122 MB in 2416 seconds (133 MB/s) And peaks like: INFO: status: 70% (225552891904/322122547200), sparse 3% (12228284416), duration 2065, read/write 181/104 MB/s INFO: status: 71% (228727980032/322122547200), sparse 3% (12228317184), duration 2091, read/write 122/122 MB/s INFO: status: 72% (232054063104/322122547200), sparse 3% (12228349952), duration 2118, read/write 123/123 MB/s INFO: status: 73% (235237539840/322122547200), sparse 3% (12230103040), duration 2147, read/write 109/109 MB/s INFO: status: 74% (238500708352/322122547200), sparse 3% (12237438976), duration 2177, read/write 108/108 MB/s Also, during backup we see the following messages in syslog of the physical node: Feb 20 00:00:18 sotllo pve-ha-lrm[3930696]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout Feb 20 00:00:18 sotllo pve-ha-lrm[3930696]: VM 103 qmp command 'query-status' failed - got timeout#012 Feb 20 00:00:28 sotllo pve-ha-lrm[3930759]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries Feb 20 00:00:28 sotllo pve-ha-lrm[3930759]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012 Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries Feb 20 00:00:38 sotllo pve-ha-lrm[3930822]: VM 103 qmp command 'query-status' failed - unable to connect to VM 103 qmp socket - timeout after 31 retries#012 [...] Feb 20 00:40:38 sotllo pve-ha-lrm[3948846]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout Feb 20 00:40:38 sotllo pve-ha-lrm[3948846]: VM 103 qmp command 'query-status' failed - got timeout#012 Feb 20 00:41:28 sotllo pve-ha-lrm[3949193]: VM 103 qmp command failed - VM 103 qmp command 'query-status' failed - got timeout Feb 20 00:41:28 sotllo pve-ha-lrm[3949193]: VM 103 qmp command 'query-status' failed - got timeout#012 So it seems backup is having a big impact on the VM. This is only seen for 3 of the 4 VMs in HA, but for the other VMs it is just logged twice, and not everyday (there're on the HDD pool). For this VM there are lots of logs everyday. CPU during backup is low in the physical server, about 1.5-3.5 max load and 10% max use. Although it has been working fine until now, maybe e1000 emulation could be the issue? We'll have to schedule downtime but can try to change to virtio. Any other ideas about what could be producing the issue? Thanks a lot for reading through here!! 
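Since the trouble window coincides with the backups, capping vzdump's bandwidth is another knob worth trying before blaming the NIC model; a sketch with example values only (61440 KiB/s is roughly 60 MiB/s, and the storage name is a placeholder):

echo "bwlimit: 61440" >> /etc/vzdump.conf                         # node-wide default for all backup jobs
vzdump 103 --bwlimit 61440 --storage backup-nas --mode snapshot   # or per run, to test one VM first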
All three nodes have the same versions: root at sotllo:~# pveversion -v proxmox-ve: 6.1-2 (running kernel: 5.3.13-3-pve) pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e) pve-kernel-5.3: 6.1-3 pve-kernel-helper: 6.1-3 pve-kernel-4.15: 5.4-12 pve-kernel-5.3.13-3-pve: 5.3.13-3 pve-kernel-4.15.18-24-pve: 4.15.18-52 pve-kernel-4.15.18-10-pve: 4.15.18-32 pve-kernel-4.13.13-5-pve: 4.13.13-38 pve-kernel-4.13.13-2-pve: 4.13.13-33 ceph: 14.2.6-pve1 ceph-fuse: 14.2.6-pve1 corosync: 3.0.2-pve4 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.13-pve1 libpve-access-control: 6.0-6 libpve-apiclient-perl: 3.0-2 libpve-common-perl: 6.0-11 libpve-guest-common-perl: 3.0-3 libpve-http-server-perl: 3.0-4 libpve-storage-perl: 6.1-4 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 3.2.1-1 lxcfs: 3.0.3-pve60 novnc-pve: 1.1.0-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.1-3 pve-cluster: 6.1-4 pve-container: 3.0-19 pve-docs: 6.1-4 pve-edk2-firmware: 2.20191127-1 pve-firewall: 4.0-10 pve-firmware: 3.0-4 pve-ha-manager: 3.0-8 pve-i18n: 2.0-4 pve-qemu-kvm: 4.1.1-2 pve-xtermjs: 4.3.0-1 qemu-server: 6.1-5 smartmontools: 7.1-pve2 spiceterm: 3.1-1 vncterm: 1.6-1 zfsutils-linux: 0.8.3-pve1 -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user Humberto Jos? de Sousa Analista de TI CTIC C?mpus S?o Jos? (48) 3381-2821 Instituto Federal de Santa Catarina - C?mpus S?o Jos? R. Jos? Lino Kretzer, 608, Praia Comprida, S?o Jos? / SC - CEP: 88103-310 www.sj.ifsc.edu.br
From leandro at tecnetmza.com.ar Fri Feb 21 13:51:02 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Fri, 21 Feb 2020 09:51:02 -0300 Subject: [PVE-User] PVE storage layout question. In-Reply-To: <8d23f135-d22c-d7b9-e8e4-8c5e21b15616@mattern.org> References: <8d23f135-d22c-d7b9-e8e4-8c5e21b15616@mattern.org> Message-ID: Hi guys , thanks for the response. You mean that both partitions are usefulls and are needed. Mi concern is to create a layout at the install moment and after some time realize that some partition is too big ot to small. So , my idea is: Designate 1Tb for LVM. Designate 2TB for LVM-thin. Left 2TB for future use: So, In the future is it possible to modify partitions size without any risk? Other: Can I add a Raid 1 with 2 ssd hard disk on remaining empty slots later or do I need to have them installed at the proxmox install moment? Thanks!!!! Leandro. El jue., 20 feb. 2020 a las 17:22, escribi?: > Hi, > > Am 20.02.20 um 17:54 schrieb Leandro Roggerone: > > Hi guys. > > I have a very old proxmox version (4.X). > > There I have only one partition where I store both vms and isos images. > > Now, I installed pve last version. > Did you reinstall your server? You can update. It's really easy - see > the wiki. > > After install default process finished I can see two partitions created. > > (LVM and LVM-thin) > The main difference (on a single server) between LVM and LVM-thin in > proxmox is that you can't use snapshots with "normal" LVM volumes. You > have to use LVM-Thin. > > After reading in storage documentation , it is not clear for me what the > > difference is between them. > > Can I use only one partition (like in my old proxmox ver4) ? > Proxmox uses LVM is a blocklevel storage. You can't use is for files > directly (https://pve.proxmox.com/pve-docs/chapter-pvesm.html). So you > need another partition/LVM Volume with a filesystem for your ISOs. LVM > is really flexible.
You can create a logical volume with a filesystem > with the shell and mount it to use it for ISOs, Backup or whatever. > > Any advice regarding storage layout good practice would be appreciated. > Depends on your usage. If possible do some fio tests with different RAID > levels, stripe sizes, cache settings... > > Currently I have a 5TB Raid5 storage available. > > > > Regards, > > Leandro. > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Fri Feb 21 14:09:12 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 21 Feb 2020 14:09:12 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID: <0b136599-7005-3081-ecd1-dfb3e6dc8027@binovo.es> Hi Humberto, We aren't using IPv6 for VM network, that can't be the issue. But thanks for the suggestion! :-) Eneko El 21/2/20 a las 12:42, Humberto Jose De Sousa via pve-user escribi?: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From damkobaranov at gmail.com Fri Feb 21 15:15:35 2020 From: damkobaranov at gmail.com (Demetri A. Mkobaranov) Date: Fri, 21 Feb 2020 15:15:35 +0100 Subject: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist In-Reply-To: References: Message-ID: <418ecb57-1436-9e22-340c-1f3bb01ec728@gmail.com> On 2/20/20 9:05 PM, Stefan M. Radman via pve-user wrote: > Try /etc/sysconfig/ip6tables > > Stefan > > # fgrep -A1 /etc/sysconfig/ip6tables /etc/sysconfig/ip6tables-config > # Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets stopped > # (e.g. on system shutdown). > -- > # Saves all firewall rules to /etc/sysconfig/ip6tables if firewall gets > # restarted. > -- > # Save counters for rules and chains to /etc/sysconfig/ip6tables if > # 'service ip6tables save' is called or on stop or restart if SAVE_ON_STOP or Thank you Stefan It seems like that path might work for RPM based distros. I'm on Debian. I tried creating the folder and an empty file (just in case) but it didn't work as expected. I didn't find a solution yet but I'm writing here what I've done so far as future reference: accordingly to Debian's instructions https://wiki.debian.org/iptables: 1. I created etc/ip6tables.up.rules, restarted pve-firewall (which in my case is disabled because I use ferm) -> no difference, still same logging 2. I installed iptables-persistent package, let the postinst script create /etc/iptables/rules.v6 but it succeeded only in creating /etc/iptables/rules.v4 (probably because ipv6 is disabled). So I touched it, restarted pve-firewall -> no difference, still same logging Any tip is appreciated Demetri From gaio at sv.lnf.it Fri Feb 21 15:29:08 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Fri, 21 Feb 2020 15:29:08 +0100 Subject: [PVE-User] How to restart ceph-mon? 
In-Reply-To: <20200220162820.GA20050@dona.proxmox.com> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> <20200220083329.GD2117767@dona.proxmox.com> <20200220141401.GH2769@sv.lnf.it> <20200220162820.GA20050@dona.proxmox.com> Message-ID: <20200221142908.GD2911@sv.lnf.it> Mandi! Alwin Antreich In chel di` si favelave... > Yes, that looks strange. But as said before, it is deprecated to use > IDs. Best destroy and re-create the MON one-by-one. The default command > will create them with the hostname as ID. Then this phenomenon should > disappear as well. Done, via web interface, with a little glitch. I've stopped and dropped the monitor, but these don't stop (and drop) the manager, and so creating a new mon va webinterface lead to: Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon at hulk.service -> /lib/systemd/system/ceph-mon at .service. INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-osd/ceph.keyring INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-rgw/ceph.keyring INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-mds/ceph.keyring INFO:ceph-create-keys:Talking to monitor... TASK ERROR: ceph manager directory '/var/lib/ceph/mgr/ceph-hulk' already exists probably because the task try also to fire up a mgr, that was just created. Anyway, nothing changed. On a rebooted node: root at capitanmarvel:~# ps aux | grep ceph[-]mon ceph 2725 0.5 0.2 522224 98428 ? Ssl feb18 21:14 /usr/bin/ceph-mon -i capitanmarvel --pid-file /var/run/ceph/mon.capitanmarvel.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph on a node when i do a 'systemctl restart ceph-mgr@.service': root at hulk:~# ps aux | grep ceph[-]mon ceph 4166380 0.8 0.1 466648 55676 ? Ssl 15:19 0:03 /usr/bin/ceph-mon -f --cluster ceph --id hulk --setuser ceph --setgroup ceph All cluster is healthy and works as expected, anyway: root at hulk:~# ceph -s cluster: id: 8794c124-c2ec-4e81-8631-742992159bd6 health: HEALTH_OK services: mon: 5 daemons, quorum blackpanther,capitanmarvel,deadpool,hulk,thor mgr: blackpanther(active), standbys: capitanmarvel, deadpool, thor, hulk osd: 12 osds: 12 up, 12 in -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From smr at kmi.com Fri Feb 21 15:53:30 2020 From: smr at kmi.com (Stefan M. 
Radman) Date: Fri, 21 Feb 2020 14:53:30 +0000 Subject: [PVE-User] IPv6 disabled - status update error: iptables_restore_cmdlist In-Reply-To: <418ecb57-1436-9e22-340c-1f3bb01ec728@gmail.com> References: <418ecb57-1436-9e22-340c-1f3bb01ec728@gmail.com> Message-ID: You're right. I was mistakenly checking a CentOS VM instead of the PVE host. Sorry. On Feb 21, 2020, at 15:15, Demetri A. Mkobaranov > wrote: It seems like that path might work for RPM based distros. I'm on Debian. CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From a.antreich at proxmox.com Fri Feb 21 15:55:10 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Fri, 21 Feb 2020 15:55:10 +0100 Subject: [PVE-User] How to restart ceph-mon? In-Reply-To: <20200221142908.GD2911@sv.lnf.it> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> <20200220083329.GD2117767@dona.proxmox.com> <20200220141401.GH2769@sv.lnf.it> <20200220162820.GA20050@dona.proxmox.com> <20200221142908.GD2911@sv.lnf.it> Message-ID: <20200221145510.GC20050@dona.proxmox.com> On Fri, Feb 21, 2020 at 03:29:08PM +0100, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > Yes, that looks strange. But as said before, it is deprecated to use > > IDs. Best destroy and re-create the MON one-by-one. The default command > > will create them with the hostname as ID. Then this phenomenon should > > disappear as well. > > Done, via web interface, with a little glitch. > > I've stopped and dropped the monitor, but these don't stop (and drop) > the manager, and so creating a new mon va webinterface lead to: > > Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon at hulk.service -> /lib/systemd/system/ceph-mon at .service. > INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' > INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing' > INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring > INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-osd/ceph.keyring > INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-rgw/ceph.keyring > INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-mds/ceph.keyring > INFO:ceph-create-keys:Talking to monitor... > TASK ERROR: ceph manager directory '/var/lib/ceph/mgr/ceph-hulk' already exists > > probably because the task try also to fire up a mgr, that was just > created. > > > Anyway, nothing changed. On a rebooted node: > > root at capitanmarvel:~# ps aux | grep ceph[-]mon > ceph 2725 0.5 0.2 522224 98428 ? 
Ssl feb18 21:14 /usr/bin/ceph-mon -i capitanmarvel --pid-file /var/run/ceph/mon.capitanmarvel.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph > > on a node when i do a 'systemctl restart ceph-mgr@.service': > > root at hulk:~# ps aux | grep ceph[-]mon > ceph 4166380 0.8 0.1 466648 55676 ? Ssl 15:19 0:03 /usr/bin/ceph-mon -f --cluster ceph --id hulk --setuser ceph --setgroup ceph I don't see this in the systemd unit files for Ceph. Also my test systems do not have the pid file either. Maybe this is something from an previous upgrade? systemctl cat ceph-mon@.service You can check with the above command how each Ceph service or target should be started. -- Cheers, Alwin From gaio at sv.lnf.it Fri Feb 21 16:18:23 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Fri, 21 Feb 2020 16:18:23 +0100 Subject: [PVE-User] How to restart ceph-mon? In-Reply-To: <20200221145510.GC20050@dona.proxmox.com> References: <20200219103906.GC6251@sv.lnf.it> <20200219105503.GC2117767@dona.proxmox.com> <20200219110544.GD6251@sv.lnf.it> <20200220083329.GD2117767@dona.proxmox.com> <20200220141401.GH2769@sv.lnf.it> <20200220162820.GA20050@dona.proxmox.com> <20200221142908.GD2911@sv.lnf.it> <20200221145510.GC20050@dona.proxmox.com> Message-ID: <20200221151823.GG2911@sv.lnf.it> Mandi! Alwin Antreich In chel di` si favelave... > > Anyway, nothing changed. On a rebooted node: > > root at capitanmarvel:~# ps aux | grep ceph[-]mon > > ceph 2725 0.5 0.2 522224 98428 ? Ssl feb18 21:14 /usr/bin/ceph-mon -i capitanmarvel --pid-file /var/run/ceph/mon.capitanmarvel.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph > > on a node when i do a 'systemctl restart ceph-mgr@.service': > > root at hulk:~# ps aux | grep ceph[-]mon > > ceph 4166380 0.8 0.1 466648 55676 ? Ssl 15:19 0:03 /usr/bin/ceph-mon -f --cluster ceph --id hulk --setuser ceph --setgroup ceph > I don't see this in the systemd unit files for Ceph. Also my test > systems do not have the pid file either. Maybe this is something from > an previous upgrade? Could be. This cluster (all, indeed) was upgraded from 4.4. > systemctl cat ceph-mon@.service > You can check with the above command how each Ceph service or target > should be started. root at capitanmarvel:~# systemctl cat ceph-mon at capitanmarvel.service # /lib/systemd/system/ceph-mon at .service [Unit] Description=Ceph cluster monitor daemon # According to: # http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget # these can be removed once ceph-mon will dynamically change network # configuration. 
After=network-online.target local-fs.target time-sync.target Wants=network-online.target local-fs.target time-sync.target PartOf=ceph-mon.target [Service] LimitNOFILE=1048576 LimitNPROC=1048576 EnvironmentFile=-/etc/default/ceph Environment=CLUSTER=ceph ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph ExecReload=/bin/kill -HUP $MAINPID PrivateDevices=yes ProtectHome=true ProtectSystem=full PrivateTmp=true TasksMax=infinity Restart=on-failure StartLimitInterval=30min StartLimitBurst=5 RestartSec=10 [Install] WantedBy=ceph-mon.target # /lib/systemd/system/ceph-mon at .service.d/ceph-after-pve-cluster.conf [Unit] After=pve-cluster.service root at hulk:~# systemctl cat ceph-mon at hulk.service # /lib/systemd/system/ceph-mon at .service [Unit] Description=Ceph cluster monitor daemon # According to: # http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget # these can be removed once ceph-mon will dynamically change network # configuration. After=network-online.target local-fs.target time-sync.target Wants=network-online.target local-fs.target time-sync.target PartOf=ceph-mon.target [Service] LimitNOFILE=1048576 LimitNPROC=1048576 EnvironmentFile=-/etc/default/ceph Environment=CLUSTER=ceph ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph ExecReload=/bin/kill -HUP $MAINPID PrivateDevices=yes ProtectHome=true ProtectSystem=full PrivateTmp=true TasksMax=infinity Restart=on-failure StartLimitInterval=30min StartLimitBurst=5 RestartSec=10 [Install] WantedBy=ceph-mon.target # /lib/systemd/system/ceph-mon at .service.d/ceph-after-pve-cluster.conf [Unit] After=pve-cluster.service seems identical to me... -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From proxmox at elchaka.de Fri Feb 21 23:07:21 2020 From: proxmox at elchaka.de (proxmox at elchaka.de) Date: Fri, 21 Feb 2020 23:07:21 +0100 Subject: [PVE-User] upgrade path to proxmox enterprise repos ? In-Reply-To: References: Message-ID: <1339D3C3-C3A3-4186-9708-2ACD67C6A872@elchaka.de> Hello Rainer, I have done this on a Cluster a few months ago without any issues till today BR Mehmet Am 19. Februar 2020 13:05:00 MEZ schrieb Rainer Krienke : >Hello, > >At the moment I run a proxmox cluster with a seperate ceph cluster as >storage backend. I do not have a proxmox subscription yet. Instead I >use >the community repository to do some upgrades to proxmox. I know that >this repos is not thought for a productive environment. > >My question is if I now start using proxmox deploying productive VMs, >is >there a problem upgrading the proxmox hosts later on with packages from >the enterprise repos? > >I would not expect any problems, but perhaps someone has already done >this and did/did not experience some kind of trouble? 
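For reference, the switch itself is only a repository change plus a subscription key on each node; a sketch for PVE 6 on Debian Buster (the file name of the no-subscription entry varies between installs and may be in sources.list instead):

sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-no-subscription.list   # path is an assumption
echo "deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise" \
    > /etc/apt/sources.list.d/pve-enterprise.list
apt update && apt full-upgrade    # needs a valid subscription key registered on the node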
> >Thanks for your help >Rainer >-- >Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 >56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312 >Web: http://userpages.uni-koblenz.de/~krienke >PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html >_______________________________________________ >pve-user mailing list >pve-user at pve.proxmox.com >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From proxmox at elchaka.de Fri Feb 21 23:07:21 2020 From: proxmox at elchaka.de (proxmox at elchaka.de) Date: Fri, 21 Feb 2020 23:07:21 +0100 Subject: [PVE-User] upgrade path to proxmox enterprise repos ? In-Reply-To: References: Message-ID: <1339D3C3-C3A3-4186-9708-2ACD67C6A872@elchaka.de> Hello Rainer, I have done this on a Cluster a few months ago without any issues till today BR Mehmet Am 19. Februar 2020 13:05:00 MEZ schrieb Rainer Krienke : >Hello, > >At the moment I run a proxmox cluster with a seperate ceph cluster as >storage backend. I do not have a proxmox subscription yet. Instead I >use >the community repository to do some upgrades to proxmox. I know that >this repos is not thought for a productive environment. > >My question is if I now start using proxmox deploying productive VMs, >is >there a problem upgrading the proxmox hosts later on with packages from >the enterprise repos? > >I would not expect any problems, but perhaps someone has already done >this and did/did not experience some kind of trouble? > >Thanks for your help >Rainer >-- >Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 >56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312 >Web: http://userpages.uni-koblenz.de/~krienke >PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html >_______________________________________________ >pve-user mailing list >pve-user at pve.proxmox.com >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From proxmox at elchaka.de Fri Feb 21 23:23:11 2020 From: proxmox at elchaka.de (proxmox at elchaka.de) Date: Fri, 21 Feb 2020 23:23:11 +0100 Subject: [PVE-User] pvelocalhost In-Reply-To: References: Message-ID: IIRC this was necessary in pve4/(5?) but not in 6 anymore. Hth Mehmet Am 19. Februar 2020 12:55:33 MEZ schrieb "Stefan M. Radman via pve-user" : >_______________________________________________ >pve-user mailing list >pve-user at pve.proxmox.com >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From james at panic.com Fri Feb 21 23:43:25 2020 From: james at panic.com (James Moore) Date: Fri, 21 Feb 2020 14:43:25 -0800 Subject: [PVE-User] pvelocalhost In-Reply-To: References: Message-ID: FWIW, I very recently installed PVE 6 for the first time and when my ansible script rewrote /etc/hosts and dropped pvelocalhost my server stopped booting properly until I restored it. > On Feb 21, 2020, at 2:23 PM, proxmox at elchaka.de wrote: > > IIRC this was necessary in pve4/(5?) but not in 6 anymore. > > Hth > Mehmet > > Am 19. Februar 2020 12:55:33 MEZ schrieb "Stefan M. 
Radman via pve-user" : >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From proxmox-user at mattern.org Sat Feb 22 11:40:48 2020 From: proxmox-user at mattern.org (proxmox-user at mattern.org) Date: Sat, 22 Feb 2020 11:40:48 +0100 Subject: [PVE-User] PVE storage layout question. In-Reply-To: References: <8d23f135-d22c-d7b9-e8e4-8c5e21b15616@mattern.org> Message-ID: <4f98e5eb-9b95-768d-1062-7098120e87a8@mattern.org> > Hi guys , thanks for the response. > You mean that both partitions are usefulls and are needed. > Mi concern is to create a layout at the install moment and after some time > realize that some partition is too big ot to small. > So , my idea is: > Designate 1Tb for LVM. > Designate 2TB for LVM-thin. > Left 2TB for future use: Yes that's possible. > So, In the future is it possible to modify partitions size without any > risk? You can increase and decrease LVM volumes. Increase is easy and almost all filesystems are able to increase size at runtime. Decrease is possible, depends on the filesystem/data you are using within the volume. > Other: > Can I add a Raid 1 with 2 ssd hard disk on remaining empty slots later or > do I need to have them installed at the proxmox install moment? Yes. That's possible. To do that you only have to add the new PV (your Raid 1) to the the existing VG. You can even move to other storage at runtime (pvmove). But again - if you want to use the full feature set of LVM you have to do it on the shell, not with the webinterface. It is essential that you understand how LVM works. (maybe this can help https://www.tecmint.com/create-lvm-storage-in-linux/) > Thanks!!!! > Leandro. If you need detailed help with your setup, feel free to contact me offlist. From smr at kmi.com Sat Feb 22 14:24:08 2020 From: smr at kmi.com (Stefan M. Radman) Date: Sat, 22 Feb 2020 13:24:08 +0000 Subject: [PVE-User] pvelocalhost In-Reply-To: References: Message-ID: <0FFAD759-B378-4630-B57D-49CB7EAAF2E5@kmi.com> James, I removed the pvelocalhost from /etc/hosts of a recently upgraded cluster (5.4=>6.1) and have not seen any negative impact up to now. Several node reboots without any errors. When the server "stopped booting properly", what errors were you seeing? Stefan On Feb 21, 2020, at 23:43, James Moore via pve-user > wrote: From: James Moore > Subject: Re: [PVE-User] pvelocalhost Date: February 21, 2020 at 23:43:25 GMT+1 To: proxmox at elchaka.de Cc: PVE User List > FWIW, I very recently installed PVE 6 for the first time and when my ansible script rewrote /etc/hosts and dropped pvelocalhost my server stopped booting properly until I restored it. On Feb 21, 2020, at 2:23 PM, proxmox at elchaka.de wrote: IIRC this was necessary in pve4/(5?) but not in 6 anymore. Hth Mehmet Am 19. Februar 2020 12:55:33 MEZ schrieb "Stefan M. 
Radman via pve-user" >: _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. From elacunza at binovo.es Mon Feb 24 10:10:31 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 24 Feb 2020 10:10:31 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID: Hi Gianni, El 20/2/20 a las 14:47, Eneko Lacunza escribi?: > We tried running the main VM backup yesterday morning, but couldn't >>> reproduce the issue, although during regular backup all 3 nodes are >>> doing backups and in the test we only performed the backup of the only >>> VM storaged on SSD pool. >>> >>> >>> How about reducing (or scheduling at different times) the backup >>> jobs on >>> each node, at least for testing if the backup is causing the problem. > I'll check with the site admin about this, didn't really think about > this but could help determine if that is the issue, thanks! We have skipped backups for the "server" VM and we had no disconnects this weekend. Tried launching backup manually, no disconnect either. We'll schedule backup for that VM at a different time to avoid the issue... Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From fk at datenfalke.de Mon Feb 24 15:41:20 2020 From: fk at datenfalke.de (Falco Kleinschmidt) Date: Mon, 24 Feb 2020 15:41:20 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID: <21962699-eba9-53d8-6b3a-8d98f4dd48a4@datenfalke.de> Am 20.02.20 um 14:47 schrieb Eneko Lacunza: > Have you tried setting (bandwidth) limits on the backup jobs and see if >>> that helps ? > Not really. I've looked through the docs, but seems I can only affect > write bandwith on NAS (only has backups). This would affect read I > guess... You can set bandwidth limits in /etc/pve/datacenter.cfg https://pve.proxmox.com/wiki/Manual:_datacenter.cfg I am doing this because of to much IO-Wait on Nodes when I am restoring. bwlimit: default=61440 -- Datenfalke - Dipl. Inf. 
Falco Kleinschmidt Adresse: Semperstra?e 11 - 45138 Essen Steuer-Nr: DE248267798 Telefon: +49-201-6124650 Email: fk at datenfalke.de WWW: https://www.datenfalke.de From elacunza at binovo.es Mon Feb 24 16:31:47 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 24 Feb 2020 16:31:47 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: <21962699-eba9-53d8-6b3a-8d98f4dd48a4@datenfalke.de> References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> <21962699-eba9-53d8-6b3a-8d98f4dd48a4@datenfalke.de> Message-ID: <99dc538f-7368-88e5-2cde-ef5d9921458e@binovo.es> Hi, El 24/2/20 a las 15:41, Falco Kleinschmidt escribi?: > Am 20.02.20 um 14:47 schrieb Eneko Lacunza: >> Have you tried setting (bandwidth) limits on the backup jobs and see if >>>> that helps ? >> Not really. I've looked through the docs, but seems I can only affect >> write bandwith on NAS (only has backups). This would affect read I >> guess... > You can set bandwidth limits in /etc/pve/datacenter.cfg > > https://pve.proxmox.com/wiki/Manual:_datacenter.cfg > > I am doing this because of to much IO-Wait on Nodes when I am restoring. > > bwlimit: default=61440 But I don't want to limit migration for example :) Also, it isn't clear wheter it will affect backups at all... Thanks Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From james at panic.com Mon Feb 24 18:31:31 2020 From: james at panic.com (James Moore) Date: Mon, 24 Feb 2020 09:31:31 -0800 Subject: [PVE-User] pvelocalhost In-Reply-To: <0FFAD759-B378-4630-B57D-49CB7EAAF2E5@kmi.com> References: <0FFAD759-B378-4630-B57D-49CB7EAAF2E5@kmi.com> Message-ID: <8D4A934B-5B1B-4F64-9877-98BCC60FB568@panic.com> I believe the pve-cluster service refused to start. It's been awhile though and my memory might be faulty. > On Feb 22, 2020, at 5:24 AM, Stefan M. Radman wrote: > > James, > > I removed the pvelocalhost from /etc/hosts of a recently upgraded cluster (5.4=>6.1) and have not seen any negative impact up to now. > Several node reboots without any errors. > > When the server "stopped booting properly", what errors were you seeing? > > Stefan > >> On Feb 21, 2020, at 23:43, James Moore via pve-user wrote: >> >> >> From: James Moore >> Subject: Re: [PVE-User] pvelocalhost >> Date: February 21, 2020 at 23:43:25 GMT+1 >> To: proxmox at elchaka.de >> Cc: PVE User List >> >> >> FWIW, I very recently installed PVE 6 for the first time and when my ansible script rewrote /etc/hosts and dropped pvelocalhost my server stopped booting properly until I restored it. >> >> >> >>> On Feb 21, 2020, at 2:23 PM, proxmox at elchaka.de wrote: >>> >>> IIRC this was necessary in pve4/(5?) but not in 6 anymore. >>> >>> Hth >>> Mehmet >>> >>> Am 19. Februar 2020 12:55:33 MEZ schrieb "Stefan M. Radman via pve-user" : >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > > CONFIDENTIALITY NOTICE: This communication may contain privileged and confidential information, or may otherwise be protected from disclosure, and is intended solely for use of the intended recipient(s). 
If you are not the intended recipient of this communication, please notify the sender that you have received this communication in error and delete and destroy all copies in your possession. > From fk at datenfalke.de Tue Feb 25 12:18:42 2020 From: fk at datenfalke.de (Falco Kleinschmidt) Date: Tue, 25 Feb 2020 12:18:42 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: <99dc538f-7368-88e5-2cde-ef5d9921458e@binovo.es> References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> <21962699-eba9-53d8-6b3a-8d98f4dd48a4@datenfalke.de> <99dc538f-7368-88e5-2cde-ef5d9921458e@binovo.es> Message-ID: <240900d8-c94f-256e-c74f-10821935a9a4@datenfalke.de> > >> You can set bandwidth limits in /etc/pve/datacenter.cfg >> >> https://pve.proxmox.com/wiki/Manual:_datacenter.cfg >> >> I am doing this because of to much IO-Wait on Nodes when I am restoring. >> >> bwlimit: default=61440 > But I don't want to limit migration for example :) > > Also, it isn't clear wheter it will affect backups at all... Ok you are right, my backups are not affected by bwlimit: default=61440. I checked in the history diagramm. -- Datenfalke - Dipl. Inf. Falco Kleinschmidt Adresse: Semperstra?e 11 - 45138 Essen Steuer-Nr: DE248267798 Telefon: +49-201-6124650 Email: fk at datenfalke.de WWW: https://www.datenfalke.de From leandro at tecnetmza.com.ar Tue Feb 25 18:15:07 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Tue, 25 Feb 2020 14:15:07 -0300 Subject: [PVE-User] setting storage layout at install time Message-ID: Hi guys, Im trying to get my pve ready for use. I want to create following layout on mi 5TB drive): 1T for LVM data 1T For LVM-thin. 3T Remaining for future partitioning. So at the hard disk options on the installer I chose : hdsize 2000 (GB). swapsize (left blank). maxroot (left blank). min free: 1000 maxxvz:1000. After install process finished and access to web gui I have: pve local(pve) Usage 1.76% (1.66 GiB of 93.99 GiB) (this is not OK). local-lvm (pve) Usage 0.00% (0 B of 877.59 GiB) (this is very close to 1Tb .. so its ok). Question is , how should I set hard disk options during installation process to accomplish wanted layout ? Regards, Leandro. From s.ivanov at proxmox.com Tue Feb 25 18:43:41 2020 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Tue, 25 Feb 2020 18:43:41 +0100 Subject: [PVE-User] Debian buster, systemd, container and nesting=1 In-Reply-To: <20200218154426.GG3479@sv.lnf.it> References: <20200218154426.GG3479@sv.lnf.it> Message-ID: <20200225184341.4b1db066@rosa.proxmox.com> Hi, On Tue, 18 Feb 2020 16:44:26 +0100 Marco Gaiarin wrote: > I'm still on PVE 5.4. > > I've upgraded a (privileged) LXC container to debian buster, that was > originally installed as debian jessie, then upgraded to stretch, but > still without systemd. > Upgrading to buster trigger systemd installation. > > After installation, most of the services, not all, does not start, eg > apache: > > root at vnc:~# systemctl status apache2.service > ? apache2.service - The Apache HTTP Server > Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled) > Active: failed (Result: exit-code) since Tue 2020-02-18 16:06:35 CET; 44s ago > Docs: https://httpd.apache.org/docs/2.4/ > Process: 120 ExecStart=/usr/sbin/apachectl start (code=exited, status=226/NAMESPACE) > > feb 18 16:06:35 vnc systemd[1]: Starting The Apache HTTP Server... 
> feb 18 16:06:35 vnc systemd[120]: apache2.service: Failed to set up mount namespacing: Permission denied > feb 18 16:06:35 vnc systemd[120]: apache2.service: Failed at step NAMESPACE spawning /usr/sbin/apachectl: Permission denied > feb 18 16:06:35 vnc systemd[1]: apache2.service: Control process exited, code=exited, status=226/NAMESPACE > feb 18 16:06:35 vnc systemd[1]: apache2.service: Failed with result 'exit-code'. > feb 18 16:06:35 vnc systemd[1]: Failed to start The Apache HTTP Server. > > google say me to add 'nesting=1' to 'features', that works, but looking at: > > https://pve.proxmox.com/wiki/Linux_Container > > i read: > > nesting= (default = 0) > Allow nesting. Best used with unprivileged containers with additional id mapping. Note that this will expose procfs and sysfs contents of the host to the guest. > > > i can convert this container to an unprivileged ones, but other no, for > examples some containers are samba domain controller, that need a > privileged container. not sure - but why would a samba need to be privileged? > > > There's another/better way to make systemd work on containers? I guess my preferred actions in order: * setup new unprivileged container and migrate the workload/services from the old one (optionally enabling nesting if needed) * try backup/restore to get a privileged container to an unprivileged one * keep the privileged container with nesting off * migrate the setup into a qemu-guest * edit the unit files of the affected services (e.g. apache) - usually it's the PrivateTmp option which causes this (it wants to mount --rbind -o rw /) - and drop the PrivateTmp option (see [0]) * consider making an apparmor override for this particular mount combination+container (which also can potentially be a security hole (some apparmor rules are bound to absolute paths and using rbind you can change the path) * turn on nesting for a privileged container (keep in mind that you then open it up quite a bit for breakouts) of course probably not all of those options can be applied in your environment. > > > Thanks. > I hope this helps! stoiko [0]https://forum.proxmox.com/threads/apache2-service-failed-to-set-up-mount-namespacing-permission-denied.56871/ From proxmox-user at mattern.org Tue Feb 25 20:03:03 2020 From: proxmox-user at mattern.org (proxmox-user at mattern.org) Date: Tue, 25 Feb 2020 20:03:03 +0100 Subject: [PVE-User] setting storage layout at install time In-Reply-To: References: Message-ID: <8ac92352-aa77-8852-1c4d-e1d4f2709f15@mattern.org> Hi, imho you should use a Debian Buster install CD and create a layout whatever you want. The Debian installer is really flexible. You can't do LVM Thin there but a LVM-Thin Pool is only a special LVM LV. So you can create two LVs and convert one LV to a Thin-Pool later (https://pve.proxmox.com/wiki/Storage:_LVM_Thin). Or create two VGs. Then go on with: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster. Regards Am 25.02.20 um 18:15 schrieb Leandro Roggerone: > Hi guys, Im trying to get my pve ready for use. > I want to create following layout on mi 5TB drive): > 1T for LVM data > 1T For LVM-thin. > 3T Remaining for future partitioning. > So at the hard disk options on the installer I chose : > hdsize 2000 (GB). > swapsize (left blank). > maxroot (left blank). > min free: 1000 > maxxvz:1000. > > After install process finished and access to web gui I have: > pve > local(pve) Usage 1.76% (1.66 GiB of 93.99 GiB) > (this is not OK). 
> local-lvm (pve) Usage 0.00% (0 B of 877.59 GiB) (this is very > close to 1Tb .. so its ok). > > Question is , how should I set hard disk options during installation > process to accomplish wanted layout ? > Regards, > Leandro. > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gaio at sv.lnf.it Wed Feb 26 12:01:56 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Wed, 26 Feb 2020 12:01:56 +0100 Subject: [PVE-User] Debian buster, systemd, container and nesting=1 In-Reply-To: <20200225184341.4b1db066@rosa.proxmox.com> References: <20200218154426.GG3479@sv.lnf.it> <20200225184341.4b1db066@rosa.proxmox.com> Message-ID: <20200226110156.GC3896@sv.lnf.it> Mandi! Stoiko Ivanov In chel di` si favelave... > > i can convert this container to an unprivileged ones, but other no, for > > examples some containers are samba domain controller, that need a > > privileged container. > not sure - but why would a samba need to be privileged? https://lists.samba.org/archive/samba/2019-December/227626.html samba, as AD Domain Controller, not as general 'share service', need the use of 'SYSTEM' namespace, that in containers is reserved by root. Indeed, if there's some 'caps' to relax that permit to use system namespace with unprivileged containers, they are welcomed! > > There's another/better way to make systemd work on containers? > I guess my preferred actions in order: > * setup new unprivileged container and migrate the workload/services from > the old one (optionally enabling nesting if needed) > * try backup/restore to get a privileged container to an unprivileged one > * keep the privileged container with nesting off > * migrate the setup into a qemu-guest > * edit the unit files of the affected services (e.g. apache) - usually > it's the PrivateTmp option which causes this (it wants to mount --rbind > -o rw /) - and drop the PrivateTmp option (see [0]) > * consider making an apparmor override for this particular mount > combination+container (which also can potentially be a security hole > (some apparmor rules are bound to absolute paths and using rbind you can > change the path) > * turn on nesting for a privileged container (keep in mind that you then > open it up quite a bit for breakouts) > of course probably not all of those options can be applied in your > environment. > [0]https://forum.proxmox.com/threads/apache2-service-failed-to-set-up-mount-namespacing-permission-denied.56871/ Mmmh... i'm a bit confused. Firstly, it is not clear to me if nesting is needed because the container is privileged, or privileged/unprivileged and nesting/non nesting are property totally indipendent. Second, in a PVE6 installation i've creared a debian buster container (unprivileged, without nesting), installed apache and run correctly, without tackling systemd units: root at vbaculalpb:~# systemctl status apache2 ? apache2.service - The Apache HTTP Server Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2020-02-26 11:35:29 CET; 15min ago Docs: https://httpd.apache.org/docs/2.4/ Main PID: 1992 (apache2) Tasks: 54 (limit: 4915) Memory: 6.7M CGroup: /system.slice/apache2.service ??1992 /usr/sbin/apache2 -k start ??1994 /usr/sbin/apache2 -k start ??1995 /usr/sbin/apache2 -k start feb 26 11:35:29 vbaculalpb systemd[1]: Starting The Apache HTTP Server... feb 26 11:35:29 vbaculalpb systemd[1]: Started The Apache HTTP Server. 
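Whether a given container is privileged and whether nesting is active can also be checked from the host side, which may help separate the two properties; a sketch (CT id 120 is only a placeholder):

pct config 120 | grep -E '^(unprivileged|features)'   # unprivileged: 1 marks an unprivileged CT,
                                                      # nesting would show up under features
pct set 120 --features nesting=1                      # enable nesting; needs a container restart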
root at vbaculalpb:~# systemctl show apache2 | grep PrivateTmp PrivateTmp=yes This could lead to the answer to first question (nesting is needed only for privileged containers), but also could lead to the fact that container management could be diffierent between PVE5 (the original request) and PVE6 (this test). So, thanks for the answer but i hope in some more clue. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From lists at merit.unu.edu Wed Feb 26 15:35:46 2020 From: lists at merit.unu.edu (mj) Date: Wed, 26 Feb 2020 15:35:46 +0100 Subject: [PVE-User] live migration amd - intel Message-ID: <1e8a40d6-cdc5-5344-1c55-9976ac31c9eb@merit.unu.edu> Hi, Just to make sure I understand something. We have an identical three-node hyperconverged pve cluster, on Intel Xeon CPU's. Now we would like to expand, and we are investigating what path to choose. The way we understand it is: if we would like to do live migrations between pve hosts, the servers need to have similar CPUs, or otherwise we need to virtualise the CPU as well. (so set CPU type to kvm64) In case it matters: 90% of our VMs are debian 9/10 hosts. While we could do change to kvm64, we currently use cpu type 'host', and we wonder how much performance kvm64 would cost us, and what potential other drawbacks this could have. Has anyone ever done much testing on this subject? Anyone with interesting insights / knowledge / experiences to share on this subject? MJ From s.reiter at proxmox.com Wed Feb 26 16:14:09 2020 From: s.reiter at proxmox.com (Stefan Reiter) Date: Wed, 26 Feb 2020 16:14:09 +0100 Subject: [PVE-User] live migration amd - intel In-Reply-To: <1e8a40d6-cdc5-5344-1c55-9976ac31c9eb@merit.unu.edu> References: <1e8a40d6-cdc5-5344-1c55-9976ac31c9eb@merit.unu.edu> Message-ID: <5135e2c8-7825-8c32-fbc2-dbe27c89b86c@proxmox.com> Hi! On 2/26/20 3:35 PM, mj wrote: > Hi, > > Just to make sure I understand something. We have an identical > three-node hyperconverged pve cluster, on Intel Xeon CPU's. > > Now we would like to expand, and we are investigating what path to choose. > > The way we understand it is: if we would like to do live migrations > between pve hosts, the servers need to have similar CPUs, or otherwise > we need to virtualise the CPU as well. (so set CPU type to kvm64) > Note that even kvm64 does not really work for cross-vendor live-migrations (i.e. AMD <-> Intel as mentioned in subject). This was recently discussed on the pve-devel list [0], as well as an older bug report [1]. [0] https://pve.proxmox.com/pipermail/pve-devel/2020-February/041750.html [1] https://bugzilla.proxmox.com/show_bug.cgi?id=1660 > In case it matters: 90% of our VMs are debian 9/10 hosts. > > While we could do change to kvm64, we currently use cpu type 'host', and > we wonder how much performance kvm64 would cost us, and what potential > other drawbacks this could have. > Aside from the above, the performance impact of switching to kvm64 is largely dependent on the software you are running within your VM. Raw CPU performance will stay the same, but most hardware acceleration (e.g. AES for encryption or AVX) will not be available to the guest. 
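Trying this out on a single VM before committing the whole cluster is cheap, since the CPU type is a per-VM setting; a sketch (the VM id and model names are examples, and the VM has to be powered off and started again to pick the change up):

qm set 104 --cpu kvm64       # lowest common denominator, best migration compatibility
qm set 104 --cpu Westmere    # example of an older Intel model that both host generations expose
qm config 104 | grep ^cpu    # verify the setting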
If the CPUs are from the same vendor, you can most likely enable the older generation CPU type for your VMs and still get away with live-migration. This way you can use _most_ acceleration features, and are just missing out on any new ones introduced in the newer CPU gen. > Has anyone ever done much testing on this subject? Anyone with > interesting insights / knowledge / experiences to share on this subject? > > MJ Hope that helps, Stefan From leandro at tecnetmza.com.ar Wed Feb 26 16:34:02 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Wed, 26 Feb 2020 12:34:02 -0300 Subject: [PVE-User] setting storage layout at install time In-Reply-To: <8ac92352-aa77-8852-1c4d-e1d4f2709f15@mattern.org> References: <8ac92352-aa77-8852-1c4d-e1d4f2709f15@mattern.org> Message-ID: Ok , this is what I set during installation process: hd size: 2000. swapsize: blank maxroot: blank minfree: 1000 maxvz: 1000. After install , this is what I got: root at pve:~# vgdisplay -v pve --- Volume group --- VG Name pve System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 7 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 2 Max PV 0 Cur PV 1 Act PV 1 VG Size 1.95 TiB PE Size 4.00 MiB Total PE 511871 Alloc PE / Size 255872 / 999.50 GiB Free PE / Size 255999 / <1000.00 GiB VG UUID zPby6x-GbsC-celw-r6ZU-3t1F-DLjX-i8CBv2 --- Logical volume --- LV Path /dev/pve/swap LV Name swap VG Name pve LV UUID ukZpGT-Mtxo-vWes-OUGT-tOXt-Nsd1-Wnrszd LV Write Access read/write LV Creation host, time proxmox, 2020-02-25 13:46:43 -0300 LV Status available # open 2 LV Size 8.00 GiB Current LE 2048 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 --- Logical volume --- LV Path /dev/pve/root LV Name root VG Name pve LV UUID 2tQrYn-igFx-gqU1-cc8j-wEM6-3q4o-3IpVKe LV Write Access read/write LV Creation host, time proxmox, 2020-02-25 13:46:43 -0300 LV Status available # open 1 LV Size 96.00 GiB ############### I need to get this partition to 1TB. Current LE 24576 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Name data VG Name pve LV UUID XQfOWm-L3fC-EtcE-4I7e-euC9-NH2g-fnWBqg LV Write Access read/write LV Creation host, time proxmox, 2020-02-25 13:46:44 -0300 LV Pool metadata data_tmeta LV Pool data data_tdata LV Status available # open 0 LV Size <877.59 GiB Allocated pool data 0.00% Allocated metadata 0.22% Current LE 224662 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:4 --- Physical volumes --- PV Name /dev/sda3 PV UUID ZpwZGJ-h774-VKh8-f81D-Evyi-d0jD-7stBLa PV Status allocatable Total PE / Free PE 511871 / 255999 How to set "Hard disk options" to accomplish my desired schemme: LVM (/dev/pve/root) = 1T. LVM-thin = 1TB. Remaining =< 3TB. reserved for future use. Regards, Leandro. El mar., 25 feb. 2020 a las 16:03, escribi?: > Hi, > > imho you should use a Debian Buster install CD and create a layout > whatever you want. The Debian installer is really flexible. You can't do > LVM Thin there but a LVM-Thin Pool is only a special LVM LV. So you can > create two LVs and convert one LV to a Thin-Pool later > (https://pve.proxmox.com/wiki/Storage:_LVM_Thin). Or create two VGs. > > Then go on with: > https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster. > > Regards > > > Am 25.02.20 um 18:15 schrieb Leandro Roggerone: > > Hi guys, Im trying to get my pve ready for use. 
> > I want to create following layout on mi 5TB drive): > > 1T for LVM data > > 1T For LVM-thin. > > 3T Remaining for future partitioning. > > So at the hard disk options on the installer I chose : > > hdsize 2000 (GB). > > swapsize (left blank). > > maxroot (left blank). > > min free: 1000 > > maxxvz:1000. > > > > After install process finished and access to web gui I have: > > pve > > local(pve) Usage 1.76% (1.66 GiB of 93.99 GiB) > > (this is not OK). > > local-lvm (pve) Usage 0.00% (0 B of 877.59 GiB) (this is > very > > close to 1Tb .. so its ok). > > > > Question is , how should I set hard disk options during installation > > process to accomplish wanted layout ? > > Regards, > > Leandro. > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From leandro at tecnetmza.com.ar Wed Feb 26 18:17:29 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Wed, 26 Feb 2020 14:17:29 -0300 Subject: [PVE-User] proxmox install "Advanced LVM Configuration Options" Message-ID: While trying to get my desired partitioning scheme. LVM = 1TB LVM-thin = 1TB Remaining = reserved for further use I read the Advanced LVM configuration options but didn't understand the use of hdsize, maxroot, minfree and maxvz so , then tried multiple combinations with following result. +-----------+------+------+------+------+------+ | hd size | 2000 | 2000 | 2000 | 2000 | 2000 | +-----------+------+------+------+------+------+ | swap size | | | | | | +-----------+------+------+------+------+------+ | maxroot | | | | 1000 | | +-----------+------+------+------+------+------+ | min free | 1000 | | | | 1000 | +-----------+------+------+------+------+------+ | maxvz | 1000 | 1000 | | | | +-----------+------+------+------+------+------+ | result (gb) | +-----------+------+------+------+------+------+ | LVM | 96 | 96 | 96 | 96 | 96 | +-----------+------+------+------+------+------+ | LMVthin | 980 | 980 | 1800 | 1800 | 877 | +-----------+------+------+------+------+------+ As you can see , LVM partition is always 96GB. 1)Can I change this result with proper combination of hdsize, maxroot,minfree and maxvz params? As I mentioned I need it to be 1TB. 2) Should I enlarge it later, after install process ? Can you point some doc with this process? Regards, Leandro. From lists at merit.unu.edu Thu Feb 27 11:09:07 2020 From: lists at merit.unu.edu (mj) Date: Thu, 27 Feb 2020 11:09:07 +0100 Subject: [PVE-User] live migration amd - intel In-Reply-To: <5135e2c8-7825-8c32-fbc2-dbe27c89b86c@proxmox.com> References: <1e8a40d6-cdc5-5344-1c55-9976ac31c9eb@merit.unu.edu> <5135e2c8-7825-8c32-fbc2-dbe27c89b86c@proxmox.com> Message-ID: <903b445a-ff22-8ac0-627e-176cacc8f5c2@merit.unu.edu> Hi Stefan, Thanks for your reply! MJ On 2/26/20 4:14 PM, Stefan Reiter wrote: > Hi! > > On 2/26/20 3:35 PM, mj wrote: >> Hi, >> >> Just to make sure I understand something. We have an identical >> three-node hyperconverged pve cluster, on Intel Xeon CPU's. >> >> Now we would like to expand, and we are investigating what path to >> choose. >> >> The way we understand it is: if we would like to do live migrations >> between pve hosts, the servers need to have similar CPUs, or otherwise >> we need to virtualise the CPU as well. 
(so set CPU type to kvm64) >> > > Note that even kvm64 does not really work for cross-vendor > live-migrations (i.e. AMD <-> Intel as mentioned in subject). This was > recently discussed on the pve-devel list [0], as well as an older bug > report [1]. > > [0] https://pve.proxmox.com/pipermail/pve-devel/2020-February/041750.html > [1] https://bugzilla.proxmox.com/show_bug.cgi?id=1660 > >> In case it matters: 90% of our VMs are debian 9/10 hosts. >> >> While we could do change to kvm64, we currently use cpu type 'host', >> and we wonder how much performance kvm64 would cost us, and what >> potential other drawbacks this could have. >> > > Aside from the above, the performance impact of switching to kvm64 is > largely dependent on the software you are running within your VM. Raw > CPU performance will stay the same, but most hardware acceleration (e.g. > AES for encryption or AVX) will not be available to the guest. > > If the CPUs are from the same vendor, you can most likely enable the > older generation CPU type for your VMs and still get away with > live-migration. This way you can use _most_ acceleration features, and > are just missing out on any new ones introduced in the newer CPU gen. > >> Has anyone ever done much testing on this subject? Anyone with >> interesting insights / knowledge / experiences to share on this subject? >> >> MJ > > Hope that helps, > > Stefan > From s.ivanov at proxmox.com Thu Feb 27 16:26:08 2020 From: s.ivanov at proxmox.com (Stoiko Ivanov) Date: Thu, 27 Feb 2020 16:26:08 +0100 Subject: [PVE-User] Debian buster, systemd, container and nesting=1 In-Reply-To: <20200226110156.GC3896@sv.lnf.it> References: <20200218154426.GG3479@sv.lnf.it> <20200225184341.4b1db066@rosa.proxmox.com> <20200226110156.GC3896@sv.lnf.it> Message-ID: <20200227162608.3135a2ea@rosa.proxmox.com> On Wed, 26 Feb 2020 12:01:56 +0100 Marco Gaiarin wrote: > Mandi! Stoiko Ivanov > In chel di` si favelave... > > > > i can convert this container to an unprivileged ones, but other no, for > > > examples some containers are samba domain controller, that need a > > > privileged container. > > not sure - but why would a samba need to be privileged? > > https://lists.samba.org/archive/samba/2019-December/227626.html > > samba, as AD Domain Controller, not as general 'share service', need > the use of 'SYSTEM' namespace, that in containers is reserved by root. > Indeed, if there's some 'caps' to relax that permit to use system > namespace with unprivileged containers, they are welcomed! AFAICU one robust (although not very performant way) to run a AD DC with NTACLs on a unprivileged container would be to use the xattr_tdb module (not actively tested though): https://wiki.samba.org/index.php/Using_the_xattr_tdb_VFS_Module > > > > > There's another/better way to make systemd work on containers? > > I guess my preferred actions in order: > > * setup new unprivileged container and migrate the workload/services from > > the old one (optionally enabling nesting if needed) > > * try backup/restore to get a privileged container to an unprivileged one > > * keep the privileged container with nesting off > > * migrate the setup into a qemu-guest > > * edit the unit files of the affected services (e.g. 
apache) - usually > > it's the PrivateTmp option which causes this (it wants to mount --rbind > > -o rw /) - and drop the PrivateTmp option (see [0]) > > * consider making an apparmor override for this particular mount > > combination+container (which also can potentially be a security hole > > (some apparmor rules are bound to absolute paths and using rbind you can > > change the path) > > * turn on nesting for a privileged container (keep in mind that you then > > open it up quite a bit for breakouts) > > of course probably not all of those options can be applied in your > > environment. > > [0]https://forum.proxmox.com/threads/apache2-service-failed-to-set-up-mount-namespacing-permission-denied.56871/ > > Mmmh... i'm a bit confused. > > Firstly, it is not clear to me if nesting is needed because the > container is privileged, or privileged/unprivileged and nesting/non > nesting are property totally indipendent. They are independent - a good explanation of what nesting does can be found in our source: https://git.proxmox.com/?p=pve-container.git;a=blob;f=src/PVE/LXC.pm;h=34ca2a357294f63e8b49d965bd54c24905642e17;hb=HEAD#l581 (it allows, among other things, mounting /proc and /sys, which is problematic for privileged containers). The issue with apache's systemd unit in the privileged container is that the mount is denied by apparmor (the apparmor rules are stricter for privileged containers than for unprivileged ones, because if someone breaks out of an unprivileged container they are only a regular user on the host). I hope this explains it. stoiko > > Second, in a PVE6 installation i've creared a debian buster container > (unprivileged, without nesting), installed apache and run correctly, > without tackling systemd units: > > root at vbaculalpb:~# systemctl status apache2 > ● apache2.service - The Apache HTTP Server > Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled) > Active: active (running) since Wed 2020-02-26 11:35:29 CET; 15min ago > Docs: https://httpd.apache.org/docs/2.4/ > Main PID: 1992 (apache2) > Tasks: 54 (limit: 4915) > Memory: 6.7M > CGroup: /system.slice/apache2.service > ├─1992 /usr/sbin/apache2 -k start > ├─1994 /usr/sbin/apache2 -k start > └─1995 /usr/sbin/apache2 -k start > > feb 26 11:35:29 vbaculalpb systemd[1]: Starting The Apache HTTP Server... > feb 26 11:35:29 vbaculalpb systemd[1]: Started The Apache HTTP Server. > root at vbaculalpb:~# systemctl show apache2 | grep PrivateTmp > PrivateTmp=yes > > This could lead to the answer to first question (nesting is needed only > for privileged containers), but also could lead to the fact that > container management could be diffierent between PVE5 (the original > request) and PVE6 (this test). > > > So, thanks for the answer but i hope in some more clue. > From thomas.naumann at ovgu.de Thu Feb 27 16:26:27 2020 From: thomas.naumann at ovgu.de (Naumann, Thomas) Date: Thu, 27 Feb 2020 15:26:27 +0000 Subject: [PVE-User] Best practice for quorum device between data centers? Message-ID: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> Hi all, we plan to build an inter-site proxmox cluster between two data centers. In total there are 8 physical cluster nodes, 4 in each data center and 2 cisco routers per data center. The physical connections between the 4 routers are 8x optical fiber, each 40GB/s. Unfortunately there is no third independent location for a quorum device. So the question is: what is the best practice to set up a quorum device for such a production cluster?
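(As a side note on the question above: when even a very small third site or external host is available, one commonly documented option is an external corosync QDevice rather than a full ninth node. A minimal sketch, assuming the external box is reachable at 203.0.113.10, which is only a placeholder:

  apt install corosync-qnetd        # on the external quorum host
  apt install corosync-qdevice      # on every cluster node
  pvecm qdevice setup 203.0.113.10  # run once on one cluster node
  pvecm status                      # should now show the additional Qdevice vote

The replies below cover the situation where no such third location exists.)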
-- Thomas Naumann From leandro at tecnetmza.com.ar Thu Feb 27 17:20:17 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 27 Feb 2020 13:20:17 -0300 Subject: [PVE-User] proxmox install "Advanced LVM Configuration Options" (solved) Message-ID: Hi guys, after reading and trying LVM commands I got my partitions resized using the following commands: lvextend -L 1T /dev/pve/root resize2fs /dev/pve/root and for LVM-thin: 'lvextend -l +100%FREE /dev/centos/var'. Then you can use lvdisplay to check the new volume size and vgdisplay to check the remaining space in the group. Hope it can help someone. Regards. Leandro. From leandro at tecnetmza.com.ar Thu Feb 27 17:29:15 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 27 Feb 2020 13:29:15 -0300 Subject: [PVE-User] Create proxmox cluster / storage question. Message-ID: Hi guys, I'm still tuning my 5.5 TB server. While setting storage options during the install process, I set 2000 for hd size, so I have 3.5 TB free to assign later. my layout is as follows: root at pve:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 5.5T 0 disk ├─sda1 8:1 0 1007K 0 part ├─sda2 8:2 0 512M 0 part └─sda3 8:3 0 2T 0 part ├─pve-swap 253:0 0 8G 0 lvm [SWAP] ├─pve-root 253:1 0 1T 0 lvm / ├─pve-data_tmeta 253:2 0 9G 0 lvm │ └─pve-data 253:4 0 949.6G 0 lvm └─pve-data_tdata 253:3 0 949.6G 0 lvm └─pve-data 253:4 0 949.6G 0 lvm sr0 11:0 1 1024M 0 rom My question is: Is it possible to expand the sda3 partition later without a service outage? Is it possible to expand the pve volume group on the sda3 partition? If I create a Proxmox cluster, what should I do with that 3.5 TB of free space? Is there a partition type best suited for this? Can I do it without a service outage? I have no services running yet, so I can experiment. Any thoughts about this would be great. Leandro. From alex at calicolabs.com Thu Feb 27 18:49:32 2020 From: alex at calicolabs.com (Alex Chekholko) Date: Thu, 27 Feb 2020 09:49:32 -0800 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> Message-ID: Hi Thomas, I think you have to choose: if they split-brain, which of the two sides do you want to keep working? Put the 9th node there. Regards, Alex On Thu, Feb 27, 2020 at 7:26 AM Naumann, Thomas wrote: > Hi at all, > > we plan to build a inter-site proxmox cluster between two data centers. > In total there are 8 physical cluster nodes, 4 in each data center and > 2 cisco routers per data center. The physical connections between the 4 > routers are 8x optical fiber, each 40GB/s. Unfortunatly there is no > third independent location for a quorum device. > So question is: what is the best practice to set up a quorum device for > such a production cluster? > -- > Thomas Naumann > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From tom at chajuti.de Thu Feb 27 20:29:45 2020 From: tom at chajuti.de (Thomas Naumann) Date: Thu, 27 Feb 2020 20:29:45 +0100 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> Message-ID: <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> hi alex, thanks for your response... that was our first idea too, but i don't like it...
what about the idea of a virtual quorum device (VM) inside the cluster? in worst case scenario (all physical connections between data centers are broken) where will be manual choise to start virtual quorum on this or that side of the cluster. Has anyone experience about this? regards, thomas On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Thu Feb 27 21:04:59 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Thu, 27 Feb 2020 21:04:59 +0100 (CET) Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> Message-ID: <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> Well you could use a virtual quorum vm, but dc1: 4 nodes + vm ha dc2: 4nodes you loose dc1 -> you loose quorum on dc2, so you can't start vm on dc2. so it's not helping. you really don't want to use HA here. but you can still play with "pvecm expected" to get back the quorum if 1 dc is down. (and maybe do dc1: 5 nodes - dc2: 4 nodes, to avoid loose quorum on both side) (also I don't known what is your storage ? ) ----- Mail original ----- De: "Thomas Naumann" ?: "proxmoxve" Envoy?: Jeudi 27 F?vrier 2020 20:29:45 Objet: Re: [PVE-User] Best practice for quorum device between data centers? hi alex, thanks for your response... that was our first idea too, but i don?t like it... what about the idea of a virtual quorum device (VM) inside the cluster? in worst case scenario (all physical connections between data centers are broken) where will be manual choise to start virtual quorum on this or that side of the cluster. Has anyone experience about this? regards, thomas On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From john at john-thomas.com Thu Feb 27 22:03:56 2020 From: john at john-thomas.com (John Thomas) Date: Thu, 27 Feb 2020 13:03:56 -0800 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> Message-ID: Maybe have a tiny PVE at a third location (call it PVE-quorum)? That way, if one data center goes off line, you have the solution you need, the "up" data center still having a quorum. JT On Thu, Feb 27, 2020 at 12:05 PM Alexandre DERUMIER wrote: > Well you could use a virtual quorum vm, but > > > dc1: 4 nodes + vm ha dc2: 4nodes > > you loose dc1 -> you loose quorum on dc2, so you can't start vm on dc2. > so it's not helping. > > you really don't want to use HA here. but you can still play with "pvecm > expected" to get back the quorum > if 1 dc is down. (and maybe do dc1: 5 nodes - dc2: 4 nodes, to avoid > loose quorum on both side) > > > (also I don't known what is your storage ? 
) > > > ----- Mail original ----- > De: "Thomas Naumann" > ?: "proxmoxve" > Envoy?: Jeudi 27 F?vrier 2020 20:29:45 > Objet: Re: [PVE-User] Best practice for quorum device between data centers? > > hi alex, > > thanks for your response... > that was our first idea too, but i don?t like it... > what about the idea of a virtual quorum device (VM) inside the cluster? > in worst case scenario (all physical connections between data centers > are broken) where will be manual choise to start virtual quorum on this > or that side of the cluster. Has anyone experience about this? > > regards, > thomas > On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From zorlin at gmail.com Fri Feb 28 03:47:42 2020 From: zorlin at gmail.com (Benjamin) Date: Fri, 28 Feb 2020 10:47:42 +0800 Subject: [PVE-User] Typo in Proxmox VE 5.4-13 Message-ID: Hi there, This typo may have already been fixed (thus why I included the version number) but... Within Proxmox Node -> System tab, one of the descriptions says "PVE Cluster Ressource Manager Daemon" - it should be "Resource" with one S, not "Ressource" Similarly, in the same place, "PVE Local HA Ressource Manager Daemon" has the same issue Hope that helps, Thanks, ~ B From d.jaeger at proxmox.com Fri Feb 28 08:41:29 2020 From: d.jaeger at proxmox.com (Dominic =?iso-8859-1?Q?J=E4ger?=) Date: Fri, 28 Feb 2020 08:41:29 +0100 Subject: [PVE-User] Typo in Proxmox VE 5.4-13 In-Reply-To: References: Message-ID: <20200228074129.GA4926@mala.proxmox.com> Hi, thank you for helping improve Proxmox VE! On Fri, Feb 28, 2020 at 10:47:42AM +0800, Benjamin wrote: > This typo may have already been fixed (thus why I included the version > number) but... > > Within Proxmox Node -> System tab, one of the descriptions says "PVE > Cluster Ressource Manager Daemon" - it should be "Resource" with one S, not > "Ressource" > > Similarly, in the same place, "PVE Local HA Ressource Manager Daemon" has > the same issue This mistake has been fixed already, it is written correctly in 6.1-7. Nonetheless, the word is still wrong in some places in our source code. Probably because it has two "s" in German. If you find something similar, please reach out again. I'd be happy to check it :) Best regards, Dominic From a.lauterer at proxmox.com Fri Feb 28 09:48:38 2020 From: a.lauterer at proxmox.com (Aaron Lauterer) Date: Fri, 28 Feb 2020 09:48:38 +0100 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> Message-ID: <152be1e1-c228-34c7-361b-0a29a0012ef1@proxmox.com> Any thoughts about using QDevice with the corosync-qnetd service running in a third location? AFAIK the QDevice mechanism can cope with higher latencies than corosync itself. On 2/27/20 10:03 PM, John Thomas wrote: > Maybe have a tiny PVE at a third location (call it PVE-quorum)? 
That way, > if one data center goes off line, you have the solution you need, the "up" > data center still having a quorum. > > JT > > > > On Thu, Feb 27, 2020 at 12:05 PM Alexandre DERUMIER > wrote: > >> Well you could use a virtual quorum vm, but >> >> >> dc1: 4 nodes + vm ha dc2: 4nodes >> >> you loose dc1 -> you loose quorum on dc2, so you can't start vm on dc2. >> so it's not helping. >> >> you really don't want to use HA here. but you can still play with "pvecm >> expected" to get back the quorum >> if 1 dc is down. (and maybe do dc1: 5 nodes - dc2: 4 nodes, to avoid >> loose quorum on both side) >> >> >> (also I don't known what is your storage ? ) >> >> >> ----- Mail original ----- >> De: "Thomas Naumann" >> ?: "proxmoxve" >> Envoy?: Jeudi 27 F?vrier 2020 20:29:45 >> Objet: Re: [PVE-User] Best practice for quorum device between data centers? >> >> hi alex, >> >> thanks for your response... >> that was our first idea too, but i don?t like it... >> what about the idea of a virtual quorum device (VM) inside the cluster? >> in worst case scenario (all physical connections between data centers >> are broken) where will be manual choise to start virtual quorum on this >> or that side of the cluster. Has anyone experience about this? >> >> regards, >> thomas >> On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Fri Feb 28 09:47:58 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 28 Feb 2020 09:47:58 +0100 Subject: [PVE-User] Create proxmox cluster / storage question. In-Reply-To: References: Message-ID: Hola Leandro, El 27/2/20 a las 17:29, Leandro Roggerone escribi?: > Hi guys , i'm still tunning my 5.5 Tb server. > While setting storage options during install process, I set 2000 for hd > size, so I have 3.5 TB free to assign later. > > my layout is as follows: > root at pve:~# lsblk > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > sda 8:0 0 5.5T 0 disk > ??sda1 8:1 0 1007K 0 part > ??sda2 8:2 0 512M 0 part > ??sda3 8:3 0 2T 0 part > ??pve-swap 253:0 0 8G 0 lvm [SWAP] > ??pve-root 253:1 0 1T 0 lvm / > ??pve-data_tmeta 253:2 0 9G 0 lvm > ? ??pve-data 253:4 0 949.6G 0 lvm > ??pve-data_tdata 253:3 0 949.6G 0 lvm > ??pve-data 253:4 0 949.6G 0 lvm > sr0 11:0 1 1024M 0 rom > > My question is: > Is it possible to expand sda3 partition later without service outage ? > Is it possible to expand pve group on sda3 partition ? You don't need to expand sda3 really. You can just create a new partition, create a new PV with it and add the new PV to pve VG. > In case to create a proxmox cluster, what should I do with that 3.5 TB free > ? I don't know really how to reply to this. If you're building a cluster, I suggest you configure some kind of shared storage; NFS server or Ceph cluster for example. > Is there a best partition type suited for this ? Can I do it without > service outage? For what? 
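A minimal sketch of that "new partition -> new PV -> extend the VG" step mentioned a few lines above (assuming the new partition ends up as /dev/sda4; adjust to whatever your partitioning tool actually creates):

  pvcreate /dev/sda4       # turn the new partition into an LVM physical volume
  vgextend pve /dev/sda4   # add it to the existing 'pve' volume group
  vgs pve                  # check the volume group's new free space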
> I have not any service running yet , so I can experiment what it takes. > Any thought about this would be great. Maybe you can start telling us your target use for this server/cluster. Also some detailed spec of the server would help; for example does it have a RAID card with more than one disk, or you're using a 6TB single disk? Cheers Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From tom at chajuti.de Fri Feb 28 13:36:24 2020 From: tom at chajuti.de (Thomas Naumann) Date: Fri, 28 Feb 2020 13:36:24 +0100 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: <152be1e1-c228-34c7-361b-0a29a0012ef1@proxmox.com> References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> <152be1e1-c228-34c7-361b-0a29a0012ef1@proxmox.com> Message-ID: unfortunatly there is no independent third location On 2/28/20 9:48 AM, Aaron Lauterer wrote: > Any thoughts about using QDevice with the corosync-qnetd service running > in a third location? > > AFAIK the QDevice mechanism can cope with higher latencies than corosync > itself. > > On 2/27/20 10:03 PM, John Thomas wrote: >> Maybe have a tiny PVE at a third location (call it PVE-quorum)?? That >> way, >> if one data center goes off line, you have the solution you need, the >> "up" >> data center still having a quorum. >> >> JT >> >> >> >> On Thu, Feb 27, 2020 at 12:05 PM Alexandre DERUMIER >> wrote: >> >>> Well you could use a virtual quorum vm, but >>> >>> >>> dc1: 4 nodes + vm ha?? dc2: 4nodes >>> >>> you loose dc1 -> you loose quorum on dc2, so you can't start vm on dc2. >>> so it's not helping. >>> >>> you really don't want to use HA here. but you can still play with "pvecm >>> expected" to get back the quorum >>> if 1 dc is down.? (and maybe do dc1: 5 nodes? - dc2: 4 nodes, to avoid >>> loose quorum on both side) >>> >>> >>> (also I don't known what is your storage ? ) >>> >>> >>> ----- Mail original ----- >>> De: "Thomas Naumann" >>> ?: "proxmoxve" >>> Envoy?: Jeudi 27 F?vrier 2020 20:29:45 >>> Objet: Re: [PVE-User] Best practice for quorum device between data >>> centers? >>> >>> hi alex, >>> >>> thanks for your response... >>> that was our first idea too, but i don?t like it... >>> what about the idea of a virtual quorum device (VM) inside the cluster? >>> in worst case scenario (all physical connections between data centers >>> are broken) where will be manual choise to start virtual quorum on this >>> or that side of the cluster. Has anyone experience about this? 
>>> >>> regards, >>> thomas >>> On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From leesteken at pm.me Fri Feb 28 13:44:54 2020 From: leesteken at pm.me (leesteken at pm.me) Date: Fri, 28 Feb 2020 12:44:54 +0000 Subject: [PVE-User] Best practice for quorum device between data centers? In-Reply-To: References: <14f63ea1db046384cbda2b28b957a2eb5ca1e692.camel@ovgu.de> <9f81e676-3abf-e6c7-1559-47f4f53b859f@chajuti.de> <191439669.3880933.1582833899689.JavaMail.zimbra@odiso.com> <152be1e1-c228-34c7-361b-0a29a0012ef1@proxmox.com> Message-ID: ??????? Original Message ??????? On Friday, February 28, 2020 1:36 PM, Thomas Naumann wrote: > unfortunatly there is no independent third location Are you physically located in one of the two datacenters? Then run it there. Otherwise, run a QDevic at the same location from where you sit and manage the cluster. If a datacenter loses connection, you want the location that you are still able to connect to to have quorum. I mean: what good will it do if you have a running cluster with quorom which you are not able to connect to to manage? Therefore, I suggest to run it at the same physical location as your desktop/laptop/office from which you manage the cluster. If that is not possible, then I don't know how to prevent a possible split-brain situation during connection problems. > On 2/28/20 9:48 AM, Aaron Lauterer wrote: > > > Any thoughts about using QDevice with the corosync-qnetd service running > > in a third location? > > AFAIK the QDevice mechanism can cope with higher latencies than corosync > > itself. > > On 2/27/20 10:03 PM, John Thomas wrote: > > > > > Maybe have a tiny PVE at a third location (call it PVE-quorum)?? That > > > way, > > > if one data center goes off line, you have the solution you need, the > > > "up" > > > data center still having a quorum. > > > JT > > > On Thu, Feb 27, 2020 at 12:05 PM Alexandre DERUMIER aderumier at odiso.com > > > wrote: > > > > > > > Well you could use a virtual quorum vm, but > > > > dc1: 4 nodes + vm ha?? dc2: 4nodes > > > > you loose dc1 -> you loose quorum on dc2, so you can't start vm on dc2. > > > > so it's not helping. > > > > you really don't want to use HA here. but you can still play with "pvecm > > > > expected" to get back the quorum > > > > if 1 dc is down.? (and maybe do dc1: 5 nodes? - dc2: 4 nodes, to avoid > > > > loose quorum on both side) > > > > (also I don't known what is your storage ? 
) > > > > ----- Mail original ----- > > > > De: "Thomas Naumann" tom at chajuti.de > > > > ?: "proxmoxve" pve-user at pve.proxmox.com > > > > Envoy?: Jeudi 27 F?vrier 2020 20:29:45 > > > > Objet: Re: [PVE-User] Best practice for quorum device between data > > > > centers? > > > > hi alex, > > > > thanks for your response... > > > > that was our first idea too, but i don?t like it... > > > > what about the idea of a virtual quorum device (VM) inside the cluster? > > > > in worst case scenario (all physical connections between data centers > > > > are broken) where will be manual choise to start virtual quorum on this > > > > or that side of the cluster. Has anyone experience about this? > > > > regards, > > > > thomas > > > > On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: > > > > > > > > > pve-user mailing list > > > > > pve-user at pve.proxmox.com > > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > pve-user mailing list > > > > pve-user at pve.proxmox.com > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > pve-user mailing list > > > > pve-user at pve.proxmox.com > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From leandro at tecnetmza.com.ar Fri Feb 28 14:43:59 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Fri, 28 Feb 2020 10:43:59 -0300 Subject: [PVE-User] Create proxmox cluster / storage question. In-Reply-To: References: Message-ID: Dear Eneko. Regarding your question , what is the tarjet use for this server. I have a dell R610 with 6 drive bays. Today I have 4 (2TB) drives in Raid5 , resulting a 5.5TB capacity. I will add 2 ssd drives later in raid1 for applications that need more read speed. The purpose for this server is to run proxmox with some VMs for external and internal access. Im planning to build a second server and create a cluster just to have more redundancy and availability. I would like to set all I can now that server is not in production and minimize risk later. Thats why im asking so many questions. Regards. Leandro. El vie., 28 feb. 2020 a las 5:49, Eneko Lacunza () escribi?: > Hola Leandro, > > El 27/2/20 a las 17:29, Leandro Roggerone escribi?: > > Hi guys , i'm still tunning my 5.5 Tb server. > > While setting storage options during install process, I set 2000 for hd > > size, so I have 3.5 TB free to assign later. > > > > my layout is as follows: > > root at pve:~# lsblk > > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > > sda 8:0 0 5.5T 0 disk > > ??sda1 8:1 0 1007K 0 part > > ??sda2 8:2 0 512M 0 part > > ??sda3 8:3 0 2T 0 part > > ??pve-swap 253:0 0 8G 0 lvm [SWAP] > > ??pve-root 253:1 0 1T 0 lvm / > > ??pve-data_tmeta 253:2 0 9G 0 lvm > > ? ??pve-data 253:4 0 949.6G 0 lvm > > ??pve-data_tdata 253:3 0 949.6G 0 lvm > > ??pve-data 253:4 0 949.6G 0 lvm > > sr0 11:0 1 1024M 0 rom > > > > My question is: > > Is it possible to expand sda3 partition later without service outage ? > > Is it possible to expand pve group on sda3 partition ? > You don't need to expand sda3 really. You can just create a new > partition, create a new PV with it and add the new PV to pve VG. 
> > > In case to create a proxmox cluster, what should I do with that 3.5 TB > free > > ? > I don't know really how to reply to this. If you're building a cluster, > I suggest you configure some kind of shared storage; NFS server or Ceph > cluster for example. > > > Is there a best partition type suited for this ? Can I do it without > > service outage? > For what? > > > I have not any service running yet , so I can experiment what it takes. > > Any thought about this would be great. > > Maybe you can start telling us your target use for this server/cluster. > Also some detailed spec of the server would help; for example does it > have a RAID card with more than one disk, or you're using a 6TB single > disk? > > Cheers > Eneko > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Fri Feb 28 15:05:35 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Fri, 28 Feb 2020 15:05:35 +0100 Subject: [PVE-User] Create proxmox cluster / storage question. In-Reply-To: References: Message-ID: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> Hola Leandro, El 28/2/20 a las 14:43, Leandro Roggerone escribi?: > Regarding your question , what is the tarjet use for this server. > I have a dell R610 with 6 drive bays. > Today I have 4 (2TB) drives in Raid5 , resulting a 5.5TB capacity. > I will add 2 ssd drives later in raid1 for applications that need more read > speed. > The purpose for this server is to run proxmox with some VMs for external > and internal access. > Im planning to build a second server and create a cluster just to have more > redundancy and availability. > > I would like to set all I can now that server is not in production and > minimize risk later. > Thats why im asking so many questions. Asking is good, but we need info to be able to help you ;) When you talk about redundacy and availability, do you want... - HA? (automatic restart of VMs in the other node in case one server fails) - Be able to move VMs from one server to the other "fast" (without copying the disks)? If your answer is yes to any of the previous questions, you have to look at using a NFS server or deploying Ceph. If it's no, then we can talk about local storage in your servers. What RAID card do you have in that server? Does it have write cache (non volatile of battery-backed) If it doesn't have such, RAID5 could prove slow (eat quite CPU), I suggest you use 2xRAID1 or a RAID10 setup. Also, please bear in mind that RAID5 with "big" disks is considered quite unsecure (risk of having a second disk failure during recovery is high). Saludos Eneko > Regards. > Leandro. > > > El vie., 28 feb. 2020 a las 5:49, Eneko Lacunza () > escribi?: > >> Hola Leandro, >> >> El 27/2/20 a las 17:29, Leandro Roggerone escribi?: >>> Hi guys , i'm still tunning my 5.5 Tb server. >>> While setting storage options during install process, I set 2000 for hd >>> size, so I have 3.5 TB free to assign later. 
>>> >>> my layout is as follows: >>> root at pve:~# lsblk >>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT >>> sda 8:0 0 5.5T 0 disk >>> ??sda1 8:1 0 1007K 0 part >>> ??sda2 8:2 0 512M 0 part >>> ??sda3 8:3 0 2T 0 part >>> ??pve-swap 253:0 0 8G 0 lvm [SWAP] >>> ??pve-root 253:1 0 1T 0 lvm / >>> ??pve-data_tmeta 253:2 0 9G 0 lvm >>> ? ??pve-data 253:4 0 949.6G 0 lvm >>> ??pve-data_tdata 253:3 0 949.6G 0 lvm >>> ??pve-data 253:4 0 949.6G 0 lvm >>> sr0 11:0 1 1024M 0 rom >>> >>> My question is: >>> Is it possible to expand sda3 partition later without service outage ? >>> Is it possible to expand pve group on sda3 partition ? >> You don't need to expand sda3 really. You can just create a new >> partition, create a new PV with it and add the new PV to pve VG. >> >>> In case to create a proxmox cluster, what should I do with that 3.5 TB >> free >>> ? >> I don't know really how to reply to this. If you're building a cluster, >> I suggest you configure some kind of shared storage; NFS server or Ceph >> cluster for example. >> >>> Is there a best partition type suited for this ? Can I do it without >>> service outage? >> For what? >> >>> I have not any service running yet , so I can experiment what it takes. >>> Any thought about this would be great. >> Maybe you can start telling us your target use for this server/cluster. >> Also some detailed spec of the server would help; for example does it >> have a RAID card with more than one disk, or you're using a 6TB single >> disk? >> >> Cheers >> Eneko >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From leandro at tecnetmza.com.ar Fri Feb 28 15:25:43 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Fri, 28 Feb 2020 11:25:43 -0300 Subject: [PVE-User] Create proxmox cluster / storage question. In-Reply-To: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> References: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> Message-ID: Dear Eneko: When you talk about redundacy and availability, do you want... - HA? (automatic restart of VMs in the other node in case one server fails) - Be able to move VMs from one server to the other "fast" (without copying the disks)? Yes , this is what I want. So regarding my original layout from my 5.5Tb storage: Im using 1T for LVM , 1TB for LVM-thin and 3.5 TB unassigned space, is it ok to use this unassigned space for a ceph ? Can I set it later ? with server on production ? Other: Using an NFS system, means to have an external server running a file sistem? So you should have at least two servers for the cluster and one for the file system? It seems to me that using ceph has a better redundancy plan an it is easier to deploy since I only need two servers. (am i right?). Thanks! El vie., 28 feb. 
2020 a las 11:06, Eneko Lacunza () escribi?: > Hola Leandro, > > El 28/2/20 a las 14:43, Leandro Roggerone escribi?: > > Regarding your question , what is the tarjet use for this server. > > I have a dell R610 with 6 drive bays. > > Today I have 4 (2TB) drives in Raid5 , resulting a 5.5TB capacity. > > I will add 2 ssd drives later in raid1 for applications that need more > read > > speed. > > The purpose for this server is to run proxmox with some VMs for external > > and internal access. > > Im planning to build a second server and create a cluster just to have > more > > redundancy and availability. > > > > I would like to set all I can now that server is not in production and > > minimize risk later. > > Thats why im asking so many questions. > Asking is good, but we need info to be able to help you ;) > > When you talk about redundacy and availability, do you want... > - HA? (automatic restart of VMs in the other node in case one server fails) > - Be able to move VMs from one server to the other "fast" (without > copying the disks)? > > If your answer is yes to any of the previous questions, you have to look > at using a NFS server or deploying Ceph. > > If it's no, then we can talk about local storage in your servers. What > RAID card do you have in that server? Does it have write cache (non > volatile of battery-backed) If it doesn't have such, RAID5 could prove > slow (eat quite CPU), I suggest you use 2xRAID1 or a RAID10 setup. Also, > please bear in mind that RAID5 with "big" disks is considered quite > unsecure (risk of having a second disk failure during recovery is high). > > Saludos > Eneko > > Regards. > > Leandro. > > > > > > El vie., 28 feb. 2020 a las 5:49, Eneko Lacunza () > > escribi?: > > > >> Hola Leandro, > >> > >> El 27/2/20 a las 17:29, Leandro Roggerone escribi?: > >>> Hi guys , i'm still tunning my 5.5 Tb server. > >>> While setting storage options during install process, I set 2000 for hd > >>> size, so I have 3.5 TB free to assign later. > >>> > >>> my layout is as follows: > >>> root at pve:~# lsblk > >>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > >>> sda 8:0 0 5.5T 0 disk > >>> ??sda1 8:1 0 1007K 0 part > >>> ??sda2 8:2 0 512M 0 part > >>> ??sda3 8:3 0 2T 0 part > >>> ??pve-swap 253:0 0 8G 0 lvm [SWAP] > >>> ??pve-root 253:1 0 1T 0 lvm / > >>> ??pve-data_tmeta 253:2 0 9G 0 lvm > >>> ? ??pve-data 253:4 0 949.6G 0 lvm > >>> ??pve-data_tdata 253:3 0 949.6G 0 lvm > >>> ??pve-data 253:4 0 949.6G 0 lvm > >>> sr0 11:0 1 1024M 0 rom > >>> > >>> My question is: > >>> Is it possible to expand sda3 partition later without service outage ? > >>> Is it possible to expand pve group on sda3 partition ? > >> You don't need to expand sda3 really. You can just create a new > >> partition, create a new PV with it and add the new PV to pve VG. > >> > >>> In case to create a proxmox cluster, what should I do with that 3.5 TB > >> free > >>> ? > >> I don't know really how to reply to this. If you're building a cluster, > >> I suggest you configure some kind of shared storage; NFS server or Ceph > >> cluster for example. > >> > >>> Is there a best partition type suited for this ? Can I do it without > >>> service outage? > >> For what? > >> > >>> I have not any service running yet , so I can experiment what it takes. > >>> Any thought about this would be great. > >> Maybe you can start telling us your target use for this server/cluster. 
> >> Also some detailed spec of the server would help; for example does it > >> have a RAID card with more than one disk, or you're using a 6TB single > >> disk? > >> > >> Cheers > >> Eneko > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From alarig at grifon.fr Fri Feb 28 20:20:47 2020 From: alarig at grifon.fr (Alarig Le Lay) Date: Fri, 28 Feb 2020 20:20:47 +0100 Subject: [PVE-User] Building pve-kernel fails Message-ID: <20200228192047.j6uleehiip2n3wxp@mew.swordarmor.fr> Hi, I would like to test (and integrate) a patch to local kernels, but if I try to build it, it fails: alarig at pikachu | master *%= pve-kernel % make test -f "submodules/ubuntu-eoan/README" || git submodule update --init submodules/ubuntu-eoan test -f "submodules/zfsonlinux/Makefile" || git submodule update --init --recursive submodules/zfsonlinux rm -rf build/modules/pkg-zfs build/modules/tmp pkg-zfs.prepared mkdir -p build/modules/tmp cp -a submodules/zfsonlinux/* build/modules/tmp cd build/modules/tmp; make kernel make[1]: Entering directory '/home/alarig/Documents/git/proxmox.com/pve-kernel/build/modules/tmp' make[1]: *** No rule to make target 'kernel'. Stop. 
make[1]: Leaving directory '/home/alarig/Documents/git/proxmox.com/pve-kernel/build/modules/tmp' make: *** [Makefile:99: pkg-zfs.prepared] Error 2 zsh: exit 2 make build/modules/tmp doesn?t have any Makefile: alarig at pikachu | master *%= pve-kernel % ls -lh build/modules/tmp total 0 drwxr-xr-x 1 alarig alarig 454 Feb 28 19:01 upstream Neither do the `upstream` directory: alarig at pikachu | master *%= pve-kernel % ls -lh build/modules/tmp/upstream/ total 100K -rw-r--r-- 1 alarig alarig 13K Feb 28 19:01 AUTHORS -rwxr-xr-x 1 alarig alarig 59 Feb 28 19:01 autogen.sh drwxr-xr-x 1 alarig alarig 278 Feb 28 19:01 cmd -rw-r--r-- 1 alarig alarig 154 Feb 28 19:01 CODE_OF_CONDUCT.md drwxr-xr-x 1 alarig alarig 6.1K Feb 28 19:01 config -rw-r--r-- 1 alarig alarig 15K Feb 28 19:01 configure.ac drwxr-xr-x 1 alarig alarig 102 Feb 28 19:01 contrib -rwxr-xr-x 1 alarig alarig 2.5K Feb 28 19:01 copy-builtin -rw-r--r-- 1 alarig alarig 1.3K Feb 28 19:01 COPYRIGHT drwxr-xr-x 1 alarig alarig 100 Feb 28 19:01 etc drwxr-xr-x 1 alarig alarig 444 Feb 28 19:01 include drwxr-xr-x 1 alarig alarig 222 Feb 28 19:01 lib -rw-r--r-- 1 alarig alarig 19K Feb 28 19:01 LICENSE -rw-r--r-- 1 alarig alarig 5.5K Feb 28 19:01 Makefile.am drwxr-xr-x 1 alarig alarig 46 Feb 28 19:01 man -rw-r--r-- 1 alarig alarig 208 Feb 28 19:01 META drwxr-xr-x 1 alarig alarig 112 Feb 28 19:01 module -rw-r--r-- 1 alarig alarig 97 Feb 28 19:01 NEWS -rw-r--r-- 1 alarig alarig 1.2K Feb 28 19:01 NOTICE -rw-r--r-- 1 alarig alarig 1.2K Feb 28 19:01 README.md drwxr-xr-x 1 alarig alarig 48 Feb 28 19:01 rpm drwxr-xr-x 1 alarig alarig 470 Feb 28 19:01 scripts -rw-r--r-- 1 alarig alarig 2.4K Feb 28 19:01 TEST drwxr-xr-x 1 alarig alarig 96 Feb 28 19:01 tests drwxr-xr-x 1 alarig alarig 36 Feb 28 19:01 udev -rw-r--r-- 1 alarig alarig 38 Feb 28 19:01 zfs.release.in Also, I had to update URLs in the git config because the shipped ones are giving 404 (they don?t have the .git): alarig at pikachu | master *%= pve-kernel % cat .git/config [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true [remote "origin"] url = https://git.proxmox.com/git/pve-kernel.git fetch = +refs/heads/*:refs/remotes/origin/* [branch "master"] remote = origin merge = refs/heads/master [submodule "submodules/ubuntu-eoan"] active = true url = https://git.proxmox.com/git/mirror_ubuntu-eoan-kernel.git [submodule "submodules/zfsonlinux"] active = true url = https://git.proxmox.com/git/zfsonlinux.git Is there an updated Makefile for pve-kernel or is there anything to do before the make? Regards, -- Alarig Le lay From lists at merit.unu.edu Sat Feb 29 12:21:05 2020 From: lists at merit.unu.edu (mj) Date: Sat, 29 Feb 2020 12:21:05 +0100 Subject: [PVE-User] osd replacement to bluestore or filestore Message-ID: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> Hi, We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph 12.2.13. I have ordered a replacement SSD, but we have the following doubt: Should we now replace the filestore HDD (journal on an SSD) with a bluestore SSD? Or should we keep the new SSD in filestore config, in order to minimise the time we run in 'mixed mode'. We have the intention to replace all filestore HDD OSDs with bluestore SSDs, but not short term, starting in half year or so. So the question is really: can we run mixed bluestore/filestore ceph cluster for an extended period of time? Anything particular to consider? MJ
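For what it's worth, while a cluster runs in such a mixed state, the objectstore type of each OSD can be checked from any monitor node; a small sketch (the OSD id 12 is only a placeholder):

  ceph osd count-metadata osd_objectstore        # totals per objectstore type, e.g. filestore vs bluestore
  ceph osd metadata 12 | grep osd_objectstore    # per-OSD check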