From adrian at apetre.sc Mon Dec 2 23:18:59 2019 From: adrian at apetre.sc (Adrian Petrescu) Date: Mon, 2 Dec 2019 17:18:59 -0500 Subject: [PVE-User] VMs created in rapid succession are assigned the same IPv4 address Message-ID: <20191202221859.sx2jrltvwe2n2fux@halmos.localdomain> Hey all, I have a pretty intriguing issue. I'm spinning up VMs through a Terraform provider(https://github.com/Telmate/terraform-provider-proxmox if it matters), which goes through the /api2/json endpoints. They are all full clones of a simple ubuntu1804 template. Everything is working just fine when I spin them up one at a time. The VMs are all just using a simple vmbr0 bridge with CIDR 192.168.128.207/16. However, if I use `count = N` (with N > 1) to create multiple VMs "at once" (I'm using scare quotes because they are still just individual calls to `POST /api2/json/nodes//qemu//clone` being fired off in rapid succession), then once everything comes up, I find that all the VMs in that batch were assigned the same IPv4 address, which makes all but one of them inaccessible. The IPv6 address is different, the MAC addresses are different, and if I reboot the VM, the IPv4 address gets reassigned to something unique as well, so it's not as if the parameterization is somehow forcing it. If I slow the calls down and make them one at a time, everything is fine. So it really does seem like the DHCP server has some sort of strange race condition that ignores the MAC. But surely any reasonable DHCP implementation can deal with such a basic case, so I must be missing something. If it helps, below is an example of 3 consecutive clones to the API which produced this outcome (with Cookie headers redacted). Any ideas what I might be doing wrong? Please let me know if I'm leaving out any other details that might be important :) ===================================================================== 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 >>>>>>>>>> REQUEST: 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: POST /api2/json/nodes/dilithium/qemu/34000/clone HTTP/1.1 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: Host: 192.168.128.207:8006 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: User-Agent: Go-http-client/1.1 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 79 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept: application/json 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/x-www-form-urlencoded 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept-Encoding: gzip 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.057-0500 [DEBUG] plugin.terraform-provider-proxmox: full=1&name=k8s-worker-5&newid=107&pool=k8s-pool&storage=local&target=dilithium 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 <<<<<<<<<< RESULT: 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: HTTP/1.1 200 OK 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: close 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 76 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Cache-Control: max-age=0 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/json;charset=UTF-8 
2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Date: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Expires: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Pragma: no-cache 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: Server: pve-api-daemon/3.0 2019-12-02T16:10:10.087-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: {"data":"UPID:dilithium:000051EF:000B902C:5DE57DB2:qmclone:34000:root at pam:"} 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 >>>>>>>>>> REQUEST: 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: GET /api2/json/nodes/dilithium/tasks/UPID:dilithium:000051EF:000B902C:5DE57DB2:qmclone:34000:root at pam:/status HTTP/1.1 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: Host: 192.168.128.207:8006 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: User-Agent: Go-http-client/1.1 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept: application/json 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept-Encoding: gzip 2019-12-02T16:10:10.088-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 <<<<<<<<<< RESULT: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: HTTP/1.1 200 OK 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 2334 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Cache-Control: max-age=0 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: Keep-Alive 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/json;charset=UTF-8 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Date: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Expires: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Pragma: no-cache 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Server: pve-api-daemon/3.0 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 
{"data":[{"diskwrite":0,"template":1,"netout":0,"cpu":0,"mem":0,"maxcpu":2,"maxdisk":8589934592,"node":"dilithium","id":"qemu/34000","maxmem":1073741824,"name":"ubuntu1804-tmpl","type":"qemu","diskread":0,"uptime":0,"status":"stopped","netin":0,"disk":0,"vmid":34000},{"node":"dilithium","maxdisk":4294967296,"maxcpu":1,"mem":0,"cpu":0,"id":"qemu/106","netout":0,"diskwrite":0,"template":0,"name":"testvm","maxmem":2147483648,"type":"qemu","diskread":0,"vmid":106,"disk":0,"netin":0,"status":"stopped","uptime":0},{"id":"qemu/107","lock":"clone","status":"unknown","type":"qemu","node":"dilithium","vmid":107},{"lock":"clone","id":"qemu/105","node":"dilithium","vmid":105,"status":"unknown","type":"qemu"},{"maxmem":68719476736,"name":"k8s-worker-1","diskwrite":360395776,"netout":117805,"template":0,"id":"qemu/101","maxdisk":8589934592,"node":"dilithium","mem":733495296,"cpu":0.0018635563792405,"maxcpu":4,"uptime":1317,"pool":"k8s-pool","vmid":101,"netin":11964331,"disk":0,"status":"running","diskread":511095448,"type":"qemu"},{"status":"running","netin":7005759,"disk":0,"vmid":103,"pool":"k8s-pool","uptime":1065,"type":"qemu","diskread":503227032,"name":"k8s-worker-4","maxmem":68719476736,"mem":684974080,"cpu":0.0045257797781555,"maxcpu":4,"maxdisk":8589934592,"node":"dilithium","id":"qemu/103","template":0,"diskwrite":268002304,"netout":45075},{"status":"running","netin":11849412,"disk":0,"vmid":104,"pool":"k8s-pool","uptime":1067,"type":"qemu","diskread":501944984,"name":"k8s-worker-3","maxmem":68719476736,"mem":708276224,"cpu":0.001597334039349,"maxcpu":4,"node":"dilithium","maxdisk":8589934592,"id":"qemu/104","netout":88985,"diskwrite":299877376,"template":0},{"maxmem":68719476736,"name":"k8s-worker-2","template":0,"diskwrite":300356608,"netout":79704,"id":"qemu/102","node":"dilithium","maxdisk":8589934592,"mem":709390336,"cpu":0.002129778719132,"maxcpu":4,"uptime":1317,"pool":"k8s-pool","vmid":102,"netin":12080125,"status":"running","disk":0,"diskread":501949080,"type":"qemu"},{"diskwrite":301437952,"template":0,"netout":89181,"cpu":0.002129778719132,"mem":729665536,"maxcpu":4,"node":"dilithium","maxdisk":8589934592,"id":"qemu/100","maxmem":68719476736,"name":"k8s-worker-0","type":"qemu","diskread":512447128,"pool":"k8s-pool","uptime":2211,"disk":0,"netin":13768347,"status":"running","vmid":100}]} 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 [DEBUG] cloning VM 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 >>>>>>>>>> REQUEST: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: POST /api2/json/nodes/dilithium/qemu/34000/clone HTTP/1.1 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Host: 192.168.128.207:8006 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: User-Agent: Go-http-client/1.1 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 79 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept: application/json 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/x-www-form-urlencoded 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept-Encoding: gzip 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: full=1&name=k8s-worker-7&newid=108&pool=k8s-pool&storage=local&target=dilithium 2019-12-02T16:10:10.109-0500 [DEBUG] 
plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 <<<<<<<<<< RESULT: 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: HTTP/1.1 200 OK 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 222 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Cache-Control: max-age=0 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: Keep-Alive 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: Keep-Alive 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/json;charset=UTF-8 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Date: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Expires: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: Pragma: no-cache 2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: Server: pve-api-daemon/3.0 2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: {"data":{"upid":"UPID:dilithium:000051EF:000B902C:5DE57DB2:qmclone:34000:root at pam:","node":"dilithium","id":"34000","pid":20975,"type":"qmclone","user":"root at pam","status":"running","pstart":757804,"starttime":1575321010}} 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 >>>>>>>>>> REQUEST: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: POST /api2/json/nodes/dilithium/qemu/34000/clone HTTP/1.1 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Host: 192.168.128.207:8006 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: User-Agent: Go-http-client/1.1 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 79 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept: application/json 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/x-www-form-urlencoded 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: Accept-Encoding: gzip 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019-12-02T16:10:10.105-0500 [DEBUG] plugin.terraform-provider-proxmox: full=1&name=k8s-worker-7&newid=108&pool=k8s-pool&storage=local&target=dilithium 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: 2019/12/02 16:10:10 <<<<<<<<<< RESULT: 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: HTTP/1.1 200 OK 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Length: 222 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Cache-Control: max-age=0 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: Keep-Alive 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Connection: Keep-Alive 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Content-Type: application/json;charset=UTF-8 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Date: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.109-0500 [DEBUG] plugin.terraform-provider-proxmox: Expires: Mon, 02 Dec 2019 21:10:10 GMT 2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: Pragma: no-cache 2019-12-02T16:10:10.110-0500 [DEBUG] 
plugin.terraform-provider-proxmox: Server: pve-api-daemon/3.0
2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox:
2019-12-02T16:10:10.110-0500 [DEBUG] plugin.terraform-provider-proxmox: {"data":{"upid":"UPID:dilithium:000051EF:000B902C:5DE57DB2:qmclone:34000:root at pam:","node":"dilithium","id":"34000","pid":20975,"type":"qmclone","user":"root at pam","status":"running","pstart":757804,"starttime":1575321010}}

--
Adrian

From t.lamprecht at proxmox.com Tue Dec 3 06:19:50 2019
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Tue, 3 Dec 2019 06:19:50 +0100
Subject: [PVE-User] VMs created in rapid succession are assigned the same IPv4 address
In-Reply-To: <20191202221859.sx2jrltvwe2n2fux@halmos.localdomain>
References: <20191202221859.sx2jrltvwe2n2fux@halmos.localdomain>
Message-ID:

Hey,

On 12/2/19 11:18 PM, Adrian Petrescu wrote:
> Hey all, I have a pretty intriguing issue.
>
> I'm spinning up VMs through a Terraform provider (https://github.com/Telmate/terraform-provider-proxmox
> if it matters), which goes through the /api2/json endpoints. They are
> all full clones of a simple ubuntu1804 template. Everything is working
> just fine when I spin them up one at a time. The VMs are all just using
> a simple vmbr0 bridge with CIDR 192.168.128.207/16.
>
> However, if I use `count = N` (with N > 1) to create multiple VMs "at
> once" (I'm using scare quotes because they are still just individual
> calls to `POST /api2/json/nodes//qemu//clone` being fired off
> in rapid succession), then once everything comes up, I find that all the
> VMs in that batch were assigned the same IPv4 address, which makes all
> but one of them inaccessible.
> The IPv6 address is different, the MAC addresses are different, and if I
> reboot the VM, the IPv4 address gets reassigned to something unique as
> well, so it's not as if the parameterization is somehow forcing it. If I
> slow the calls down and make them one at a time, everything is fine. So
> it really does seem like the DHCP server has some sort of strange race
> condition that ignores the MAC. But surely any reasonable DHCP
> implementation can deal with such a basic case, so I must be missing
> something.

This really sounds like a bad DHCP server: it's the one assigning the same IP
multiple times, which it really shouldn't. What DHCP server do you use?
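For readers reproducing this outside Terraform: the clone calls in the log above are plain form-encoded POSTs against the standard PVE API, so a burst of them can also be fired from a shell. The sketch below is only illustrative: the password placeholder, the use of jq, and the choice of new VM IDs are assumptions; the endpoint and form fields are taken directly from the log.

  # log in once and keep the ticket + CSRF token (PVE ticket-based auth)
  AUTH=$(curl -sk --data 'username=root@pam&password=SECRET' \
      https://192.168.128.207:8006/api2/json/access/ticket)
  TICKET=$(echo "$AUTH" | jq -r '.data.ticket')
  CSRF=$(echo "$AUTH" | jq -r '.data.CSRFPreventionToken')

  # two full clones of template 34000, fired back-to-back like `count = 2` does
  for NEWID in 107 108; do
    curl -sk -b "PVEAuthCookie=$TICKET" -H "CSRFPreventionToken: $CSRF" \
      --data "full=1&name=k8s-worker-$NEWID&newid=$NEWID&pool=k8s-pool&storage=local&target=dilithium" \
      "https://192.168.128.207:8006/api2/json/nodes/dilithium/qemu/34000/clone" &
  done
  wait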
From gilberto.nunes32 at gmail.com Tue Dec 3 20:27:30 2019
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Tue, 3 Dec 2019 16:27:30 -0300
Subject: [PVE-User] Migrate direct from Xen to iSCSI LUN
Message-ID:

Hi there. I'm looking for some insight about how to migrate a VM directly from
Citrix Xen to an iSCSI LUN. I have a Dell Storage MD3200i, so the only way is
to work directly with LUNs... Or perhaps someone can point me in the right
direction. I suppose I can use Clonezilla to migrate on-the-fly to a VM which
has its disk on an iSCSI LUN.... Any advice would be nice?
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36

From martin at proxmox.com Wed Dec 4 10:38:34 2019
From: martin at proxmox.com (Martin Maurer)
Date: Wed, 4 Dec 2019 10:38:34 +0100
Subject: [PVE-User] Proxmox VE 6.1 released!
Message-ID: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com>

Hi all,

We are very excited to announce the general availability of Proxmox VE 6.1.

It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.

This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it's now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F.

The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.

In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.

We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1

Video intro
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso/

Documentation
https://pve.proxmox.com/pve-docs/

Community Forum
https://forum.proxmox.com

Source Code
https://git.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

FAQ
Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt?
A: Yes, just via GUI or via CLI with apt update && apt dist-upgrade

Q: Can I install Proxmox VE 6.1 on top of Debian Buster?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster

Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus?
A: This is a two step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, please follow exactly the upgrade documentation.
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus

Q: Where can I get more information about future feature updates?
A: Check our roadmap, forum, mailing list and subscribe to our newsletter.

A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting!

--
Best Regards,
Martin Maurer
Proxmox VE project leader
martin at proxmox.com
https://www.proxmox.com

From contact+dev at gilouweb.com Wed Dec 4 12:11:46 2019
From: contact+dev at gilouweb.com (Gilles Pietri)
Date: Wed, 4 Dec 2019 12:11:46 +0100
Subject: [PVE-User] Proxmox VE 6.1 released!
In-Reply-To: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com>
References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com>
Message-ID: <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com>

On 04/12/2019 at 10:38, Martin Maurer wrote:
> Hi all,
>
> We are very excited to announce the general availability of Proxmox VE 6.1.
>
> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.
>
> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it's now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F.
>
> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.
>
> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.
>
> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes.

Hi!

This is amazing, thanks a lot for your work, it is appreciated. Proxmox
truly is a wonderful project, and both the community and the company
behind it deserve every thanks and support for their nice work.

Setting up the update test bed right now!

Regards,
Gilles

From devzero at web.de Wed Dec 4 15:33:39 2019
From: devzero at web.de (Roland @web.de)
Date: Wed, 4 Dec 2019 15:33:39 +0100
Subject: [PVE-User] Proxmox VE 6.1 released!
In-Reply-To: <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com>
References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com>
Message-ID: <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de>

Thanks for making Proxmox!

Unfortunately I cannot install it on a Fujitsu RX300 S6; mouse/keyboard
won't work in the installer screen anymore. 6.0 and before works without
problems.

Will test on another machine soon.

Maybe someone has similar issues... or has a clue how to work around it.

Roland

On 04.12.19 at 12:11, Gilles Pietri wrote:
> On 04/12/2019 at 10:38, Martin Maurer wrote:
>> Hi all,
>>
>> We are very excited to announce the general availability of Proxmox VE 6.1.
>>
>> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.
>>
>> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it's now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F.
>>
>> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown.
>>
>> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.
>>
>> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync.
Countless other bugfixes and smaller improvements are listed in the release notes. > Hi! > > This is amazing, thanks a lot for your work, it is appreciated. Proxmox > truly is a wonderful project and both the community and the company > behind it deserves every thanks and support for their nice work. > > Setting up the update test bed right now! > > Regards, > Gilles > > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From olivier.benghozi at wifirst.fr Wed Dec 4 15:36:36 2019 From: olivier.benghozi at wifirst.fr (Olivier Benghozi) Date: Wed, 4 Dec 2019 15:36:36 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com> <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> Message-ID: Hi, I suggest you should just leave appart the proxmox iso installer. Had only problems with it. Use debian_buster installer then install proxmox using apt. Works fine. > Le 4 d?c. 2019 ? 15:33, Roland @web.de a ?crit : > > thanks for making proxmox! > > unfortunatly i cannot install it on fujitsu rx300 s6, mouse/keyboard > won't work in installer screen anymore. 6.0 and before works without > problems. > > will test on other machine soon. > > maybe someone has similar issues....or has a clue how to workaround > > roland > > > Am 04.12.19 um 12:11 schrieb Gilles Pietri: > >> Le 04/12/2019 ? 10:38, Martin Maurer a ?crit : >>> Hi all, >>> >>> We are very excited to announce the general availability of Proxmox VE 6.1. >>> >>> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies. >>> >>> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it?s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F. >>> >>> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown. >>> >>> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1. >>> >>> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes. >> Hi! >> >> This is amazing, thanks a lot for your work, it is appreciated. Proxmox >> truly is a wonderful project and both the community and the company >> behind it deserves every thanks and support for their nice work. >> >> Setting up the update test bed right now! 
>> >> Regards, >> Gilles >> >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From t.lamprecht at proxmox.com Wed Dec 4 16:13:36 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Wed, 4 Dec 2019 16:13:36 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com> <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> Message-ID: <50a13b3b-b0b9-1a8d-a84c-89cbab543ce1@proxmox.com> On 12/4/19 3:33 PM, Roland @web.de wrote: > thanks for making proxmox! > > unfortunatly i cannot install it on fujitsu rx300 s6, mouse/keyboard > won't work in installer screen anymore. 6.0 and before works without > problems. We try hard to fix Installer issues, so more information would be great. I guess it still works in the grub menu? (just to be sure) If you select "Debug Mode", does it works there too (initially)? As it worked with 6.0 and before problems could come from a regression with the 5.3 kernel. We now also include more drivers, which actually fixed an issue for the Mailgateway (re-uses installer) and keyboard input under Hyper-V.. regards, Thomas > > will test on other machine soon. > > maybe someone has similar issues....or has a clue how to workaround > > roland > > > Am 04.12.19 um 12:11 schrieb Gilles Pietri: > >> Le 04/12/2019 ? 10:38, Martin Maurer a ?crit?: >>> Hi all, >>> >>> We are very excited to announce the general availability of Proxmox VE 6.1. >>> >>> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies. >>> >>> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it?s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F. >>> >>> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown. >>> >>> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1. >>> >>> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes. >> Hi! >> >> This is amazing, thanks a lot for your work, it is appreciated. Proxmox >> truly is a wonderful project and both the community and the company >> behind it deserves every thanks and support for their nice work. >> >> Setting up the update test bed right now! 
>> >> Regards, >> Gilles From t.lamprecht at proxmox.com Wed Dec 4 16:16:32 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Wed, 4 Dec 2019 16:16:32 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com> <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> Message-ID: <2d1971be-75d9-422a-db09-4b352ecdf6a3@proxmox.com> Hi, On 12/4/19 3:36 PM, Olivier Benghozi wrote: > I suggest you should just leave appart the proxmox iso installer. Had only problems with it. We did tens of installations test just this week on many different HW, combinations. Not a single one were it didn't worked here. We'd be happy if you share your specific problems and HW, maybe on https://bugzilla.proxmox.com/ maybe we can take a look at them. cheers, Thomas From venefax at gmail.com Wed Dec 4 16:58:34 2019 From: venefax at gmail.com (Saint Michael) Date: Wed, 4 Dec 2019 10:58:34 -0500 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <2d1971be-75d9-422a-db09-4b352ecdf6a3@proxmox.com> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com> <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> <2d1971be-75d9-422a-db09-4b352ecdf6a3@proxmox.com> Message-ID: simple question: how do I know what exact version I have using the command line? How do I upgrade the later major version, 5.X to the current version from the command line? Philip On Wed, Dec 4, 2019 at 10:16 AM Thomas Lamprecht wrote: > Hi, > > On 12/4/19 3:36 PM, Olivier Benghozi wrote: > > I suggest you should just leave appart the proxmox iso installer. Had > only problems with it. > > We did tens of installations test just this week on many different HW, > combinations. Not a single one were it didn't worked here. > > We'd be happy if you share your specific problems and HW, maybe on > https://bugzilla.proxmox.com/ maybe we can take a look at them. > > cheers, > Thomas > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From s.reiter at proxmox.com Wed Dec 4 17:19:54 2019 From: s.reiter at proxmox.com (Stefan Reiter) Date: Wed, 4 Dec 2019 17:19:54 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <4115c069-7c0c-7efc-dd57-bf2c67d7a86a@gilouweb.com> <9da2b503-0cfd-379f-f443-8fdefb86e987@web.de> <2d1971be-75d9-422a-db09-4b352ecdf6a3@proxmox.com> Message-ID: <6c1d53a7-0cfa-ec8d-360b-19ede4f88d38@proxmox.com> Hi! On 12/4/19 4:58 PM, Saint Michael wrote: > simple question: how do I know what exact version I have using the command > line? 'pveversion' or 'pveversion -v' > How do I upgrade the later major version, 5.X to the current version from > the command line? From 6.0 to 6.1, a simple 'apt update' and 'apt full-upgrade' are enough, to migrate from 5.x to 6.1 you can follow our wiki: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 Hope that helps! Stefan > Philip > > On Wed, Dec 4, 2019 at 10:16 AM Thomas Lamprecht > wrote: > >> Hi, >> >> On 12/4/19 3:36 PM, Olivier Benghozi wrote: >>> I suggest you should just leave appart the proxmox iso installer. Had >> only problems with it. >> >> We did tens of installations test just this week on many different HW, >> combinations. Not a single one were it didn't worked here. 
>> >> We'd be happy if you share your specific problems and HW, maybe on >> https://bugzilla.proxmox.com/ maybe we can take a look at them. >> >> cheers, >> Thomas >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From uwe.sauter.de at gmail.com Wed Dec 4 23:17:00 2019 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Wed, 4 Dec 2019 23:17:00 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> Message-ID: <9d301eb5-743d-fb85-5197-feb6078e42b5@gmail.com> Hi, upgraded a cluster of three servers to 6.1. Currently I'm in the process of rebooting them one after the other. When trying to migrate VMs to a host that was already rebooted I get the following in the task viewer window in the web ui: Check VM 109: precondition check passed Migrating VM 109 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. trying to acquire lock... OK Check VM 200: precondition check passed Migrating VM 200 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. Check VM 203: precondition check passed Migrating VM 203 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. Check VM 204: precondition check passed Migrating VM 204 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. Check VM 205: precondition check passed Migrating VM 205 Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. All jobs finished, used 5 workers in total. TASK OK Hope this is just cosmetic? Regards, Uwe Am 04.12.19 um 10:38 schrieb Martin Maurer: > Hi all, > > We are very excited to announce the general availability of Proxmox VE 6.1. > > It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies. > > This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it?s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F. > > The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown. > > In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1. > > We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes. 
> > Release notes > https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1 > > Video intro > https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1 > > Download > https://www.proxmox.com/en/downloads > Alternate ISO download: > http://download.proxmox.com/iso/ > > Documentation > https://pve.proxmox.com/pve-docs/ > > Community Forum > https://forum.proxmox.com > > Source Code > https://git.proxmox.com > > Bugtracker > https://bugzilla.proxmox.com > > FAQ > Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt? > A: Yes, just via GUI or via CLI with apt update && apt dist-upgrade > > Q: Can I install Proxmox VE 6.1 on top of Debian Buster? > A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster > > Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus? > A: This is a two step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, please follow exactly the upgrade documentation. > https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 > https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus > > Q: Where can I get more information about future feature updates? > A: Check our roadmap, forum, mailing list and subscribe to our newsletter. > > A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting! > From t.lamprecht at proxmox.com Thu Dec 5 07:58:03 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Thu, 5 Dec 2019 07:58:03 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <9d301eb5-743d-fb85-5197-feb6078e42b5@gmail.com> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <9d301eb5-743d-fb85-5197-feb6078e42b5@gmail.com> Message-ID: Hi, On 12/4/19 11:17 PM, Uwe Sauter wrote: > Hi, > > upgraded a cluster of three servers to 6.1. Currently I'm in the process of rebooting them one after the other. > Upgrade from 5.4 to 6.1 or from 6.0 to 6.1 ? > When trying to migrate VMs to a host that was already rebooted I get the following in the task viewer window in the web ui: > > Check VM 109: precondition check passed > Migrating VM 109 > Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. > trying to acquire lock... > ?OK > Check VM 200: precondition check passed > Migrating VM 200 > Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. > Check VM 203: precondition check passed > Migrating VM 203 > Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. > Check VM 204: precondition check passed > Migrating VM 204 > Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. > Check VM 205: precondition check passed > Migrating VM 205 > Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. > All jobs finished, used 5 workers in total. > TASK OK > > > Hope this is just cosmetic? > It is, but I'm wondering why you get this.. Migration was just started normally through the webinterface, or? regards, Thomas > > Regards, > > ????Uwe > > > > Am 04.12.19 um 10:38 schrieb Martin Maurer: >> Hi all, >> >> We are very excited to announce the general availability of Proxmox VE 6.1. 
>> >> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies. >> >> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it?s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F. >> >> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown. >> >> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1. >> >> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes. >> >> Release notes >> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1 >> >> Video intro >> https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1 >> >> Download >> https://www.proxmox.com/en/downloads >> Alternate ISO download: >> http://download.proxmox.com/iso/ >> >> Documentation >> https://pve.proxmox.com/pve-docs/ >> >> Community Forum >> https://forum.proxmox.com >> >> Source Code >> https://git.proxmox.com >> >> Bugtracker >> https://bugzilla.proxmox.com >> >> FAQ >> Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt? >> A: Yes, just via GUI or via CLI with apt update && apt dist-upgrade >> >> Q: Can I install Proxmox VE 6.1 on top of Debian Buster? >> A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster >> >> Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus? >> A: This is a two step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, please follow exactly the upgrade documentation. >> https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 >> https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus >> >> Q: Where can I get more information about future feature updates? >> A: Check our roadmap, forum, mailing list and subscribe to our newsletter. >> >> A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting! >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From uwe.sauter.de at gmail.com Thu Dec 5 08:47:49 2019 From: uwe.sauter.de at gmail.com (Uwe Sauter) Date: Thu, 5 Dec 2019 08:47:49 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <9d301eb5-743d-fb85-5197-feb6078e42b5@gmail.com> Message-ID: <56e377c9-a052-25e4-7869-76786f6fb38e@gmail.com> Am 05.12.19 um 07:58 schrieb Thomas Lamprecht: > Hi, > > On 12/4/19 11:17 PM, Uwe Sauter wrote: >> Hi, >> >> upgraded a cluster of three servers to 6.1. Currently I'm in the process of rebooting them one after the other. 
>> > > Upgrade from 5.4 to 6.1 or from 6.0 to 6.1 ? 6.0 to 6.1 > >> When trying to migrate VMs to a host that was already rebooted I get the following in the task viewer window in the web ui: >> >> Check VM 109: precondition check passed >> Migrating VM 109 >> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >> trying to acquire lock... >> ?OK >> Check VM 200: precondition check passed >> Migrating VM 200 >> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >> Check VM 203: precondition check passed >> Migrating VM 203 >> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >> Check VM 204: precondition check passed >> Migrating VM 204 >> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >> Check VM 205: precondition check passed >> Migrating VM 205 >> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >> All jobs finished, used 5 workers in total. >> TASK OK >> >> >> Hope this is just cosmetic? >> > > It is, but I'm wondering why you get this.. Migration was just started normally > through the webinterface, or? I selected the server on the left, then bulk actions, migrate, all running VMs, chose the target host and started migration. Regards, Uwe > > regards, > Thomas > >> >> Regards, >> >> ????Uwe >> >> >> >> Am 04.12.19 um 10:38 schrieb Martin Maurer: >>> Hi all, >>> >>> We are very excited to announce the general availability of Proxmox VE 6.1. >>> >>> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies. >>> >>> This release brings new configuration options available in the GUI which make working with Proxmox VE even more comfortable and secure. Editing the cluster-wide bandwidth limit for traffic types such as migration, backup-restore, clone, etc. is possible via the GUI. If the optional package ifupdown2 of the Debian network interface manager is installed, it?s now possible to change the network configuration and reload it in the Proxmox web interface without a reboot. We have improvements to 2-factor authentication with TOTP and U2F. >>> >>> The HA stack has been improved and comes with a new 'migrate' shutdown policy, migrating running services to another node on shutdown. >>> >>> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1. >>> >>> We have some notable bug fixes, one of them being the QEMU monitor timeout issue or stability improvements for corosync. Countless other bugfixes and smaller improvements are listed in the release notes. >>> >>> Release notes >>> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1 >>> >>> Video intro >>> https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1 >>> >>> Download >>> https://www.proxmox.com/en/downloads >>> Alternate ISO download: >>> http://download.proxmox.com/iso/ >>> >>> Documentation >>> https://pve.proxmox.com/pve-docs/ >>> >>> Community Forum >>> https://forum.proxmox.com >>> >>> Source Code >>> https://git.proxmox.com >>> >>> Bugtracker >>> https://bugzilla.proxmox.com >>> >>> FAQ >>> Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt? 
>>> A: Yes, just via GUI or via CLI with apt update && apt dist-upgrade >>> >>> Q: Can I install Proxmox VE 6.1 on top of Debian Buster? >>> A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster >>> >>> Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus? >>> A: This is a two step process. First, you have to upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, please follow exactly the upgrade documentation. >>> https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0 >>> https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus >>> >>> Q: Where can I get more information about future feature updates? >>> A: Check our roadmap, forum, mailing list and subscribe to our newsletter. >>> >>> A big THANK YOU to our active community for all your feedback, testing, bug reporting and patch submitting! >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > From t.lamprecht at proxmox.com Thu Dec 5 14:00:34 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Thu, 5 Dec 2019 14:00:34 +0100 Subject: [PVE-User] Proxmox VE 6.1 released! In-Reply-To: <56e377c9-a052-25e4-7869-76786f6fb38e@gmail.com> References: <978c571a-74cf-04f8-f726-f439d46408c6@proxmox.com> <9d301eb5-743d-fb85-5197-feb6078e42b5@gmail.com> <56e377c9-a052-25e4-7869-76786f6fb38e@gmail.com> Message-ID: On 12/5/19 8:47 AM, Uwe Sauter wrote: > Am 05.12.19 um 07:58 schrieb Thomas Lamprecht: >> On 12/4/19 11:17 PM, Uwe Sauter wrote: >>> When trying to migrate VMs to a host that was already rebooted I get the following in the task viewer window in the web ui: >>> >>> Check VM 109: precondition check passed >>> Migrating VM 109 >>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441. >>> [...] >>> Hope this is just cosmetic? >>> >> >> It is, but I'm wondering why you get this.. Migration was just started normally >> through the webinterface, or? > > > I selected the server on the left, then bulk actions, migrate, all running VMs, chose the target host and started migration. > OK, bulk migration was the thing, old gui could pass an undefined value for the new "with local disks" option for bulk migration, that wasn't caught correctly. Behavior was not impacted, but lots of ugly warnings are never nice. Fixed in master, thanks a lot for reporting! cheers, Thomas From gilberto.nunes32 at gmail.com Thu Dec 5 20:53:34 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 5 Dec 2019 16:53:34 -0300 Subject: [PVE-User] Storage iSCSI and multipath Message-ID: Hi there I have some doubt regarding multipath iscsi I have set up the multipath and it's ok... But my doubt is how can I point the storage in proxmox to see both IP to connect to iscsi storage? 
I tried this

iscsi: VM-DATA
        portal 192.168.130.102
        portal 192.168.131.102
        target iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139OK
        content images

but when I list the storage

pvesm list VM-DATA

got this error

file /etc/pve/storage.cfg line 7 (section 'VM-DATA') - unable to parse value of 'portal': duplicate attribute
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36

From martin at holub.co.at Thu Dec 5 23:01:49 2019
From: martin at holub.co.at (Martin Holub)
Date: Thu, 5 Dec 2019 23:01:49 +0100
Subject: Re: [PVE-User] Storage iSCSI and multipath
In-Reply-To:
References:
Message-ID: <6acecea8-3803-c499-e481-f4f4a4acec6a@holub.co.at>

On 05.12.19 at 20:53, Gilberto Nunes wrote:
> Hi there
>
> I have some doubt regarding multipath iscsi
> I have set up the multipath and it's ok...
> But my doubt is how can I point the storage in proxmox to see both IP to
> connect to iscsi storage?
> I tried this
> iscsi: VM-DATA
> portal 192.168.130.102
> portal 192.168.131.102
> target
> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139OK
> content images
>
> but when I list the storage
> pvesm list VM-DATA
> got this error
>
> file /etc/pve/storage.cfg line 7 (section 'VM-DATA') - unable to parse
> value of 'portal': duplicate attribute
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Hi,

If I haven't missed it, there is still no multipath support within
Proxmox? You will need to use something like multipath-tools on Debian,
create a block device and then use it with LVM or whatever. There is an
article in the wiki covering the setup:
https://pve.proxmox.com/wiki/ISCSI_Multipath

Best
Martin

From lindsay.mathieson at gmail.com Fri Dec 6 01:31:58 2019
From: lindsay.mathieson at gmail.com (Lindsay Mathieson)
Date: Fri, 6 Dec 2019 10:31:58 +1000
Subject: [PVE-User] pve5to6 : FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!
Message-ID: <9557acab-f9a5-59e8-1124-75dd35fbfb7f@gmail.com>

As per the subject, I have the error: "FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!"

Can I ignore that for the upgrade? I had constant problems with multicast, udpu is quite reliable.

--
Lindsay

From t.lamprecht at proxmox.com Fri Dec 6 06:51:38 2019
From: t.lamprecht at proxmox.com (Thomas Lamprecht)
Date: Fri, 6 Dec 2019 06:51:38 +0100
Subject: [PVE-User] pve5to6 : FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!
In-Reply-To: <9557acab-f9a5-59e8-1124-75dd35fbfb7f@gmail.com>
References: <9557acab-f9a5-59e8-1124-75dd35fbfb7f@gmail.com>
Message-ID:

On 12/6/19 1:31 AM, Lindsay Mathieson wrote:
> As per the subject, I have the error: "FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!"
>
> Can I ignore that for the upgrade? I had constant problems with multicast, udpu is quite reliable.
>

FAILures from the checker script are (almost) *never* ignore-able. :)

In this case you will be glad to hear that with corosync 3, a new transport
technology was adopted, i.e., kronosnet. It currently is only capable of
unicast. The corosync internal multicast-udp and udpu stack was deprecated
and removed in favor of that. So having it set to udpu will fail the upgrade.

See:
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Cluster:_always_upgrade_to_Corosync_3_first

In your case, and a healthy cluster, I'd drop the transport while *not*
restarting corosync yet. That's a change which cannot be applied live, so
corosync will ignore it for now. Then you can continue with the upgrade
to corosync 3 - still on PVE 5/Stretch, see above.

cheers,
Thomas
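To make the suggestion above concrete: on a PVE 5.x cluster this usually comes down to editing the totem section of /etc/pve/corosync.conf (safest on a copy that is then moved into place) so that the explicit transport line disappears and config_version is bumped by one. The following is only a rough sketch; the cluster name, version number and network values are illustrative placeholders.

  totem {
    cluster_name: Softlog
    config_version: 16          # old value + 1, so the change propagates
    interface {
      bindnetaddr: 192.168.5.0
      ringnumber: 0
    }
    ip_version: ipv4
    secauth: on
    transport: udpu             # <- delete this line before the upgrade
    version: 2
  }

As noted above, corosync 2.x cannot apply a transport change at runtime, so the running cluster keeps using udpu until the nodes are moved to corosync 3 as part of the documented upgrade.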
From lindsay.mathieson at gmail.com Fri Dec 6 07:27:20 2019
From: lindsay.mathieson at gmail.com (Lindsay Mathieson)
Date: Fri, 6 Dec 2019 16:27:20 +1000
Subject: [PVE-User] pve5to6 : FAIL: Corosync transport explicitly set to 'udpu' instead of implicit default!
In-Reply-To:
References: <9557acab-f9a5-59e8-1124-75dd35fbfb7f@gmail.com>
Message-ID:

Thanks, that did the trick, with some sweaty moments :) Cluster all updated
to corosync 3.0 and healthy.

On Fri, 6 Dec 2019 at 15:51, Thomas Lamprecht wrote:
> On 12/6/19 1:31 AM, Lindsay Mathieson wrote:
>> As per the subject, I have the error: "FAIL: Corosync transport
>> explicitly set to 'udpu' instead of implicit default!"
>>
>> Can I ignore that for the upgrade? I had constant problems with
>> multicast, udpu is quite reliable.
>>
>> FAILures from the checker script are (almost) *never* ignore-able. :)
>>
>> In this case you will be glad to hear that with corosync 3, a new transport
>> technology was adopted, i.e., kronosnet. It currently is only capable of
>> unicast. The corosync internal multicast-udp and udpu stack was deprecated
>> and removed in favor of that.
So having it set to udpu will fail the >> upgrade. >> >> See: >> >> https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Cluster:_always_upgrade_to_Corosync_3_first >> >> >> In your case, and a healthy cluster, I'd drop the transport while *not* >> restarting corosync yet. That's a change which cannot be applied live, so >> corosync will ignore it for now. Then you can continue with the upgrade >> to corosync 3 - still on PVE 5/Stretch, see above. >> >> cheers, >> Thomas >> >> > From gilberto.nunes32 at gmail.com Fri Dec 6 13:32:38 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Fri, 6 Dec 2019 09:32:38 -0300 Subject: [PVE-User] Storage iSCSI and multipath In-Reply-To: References: Message-ID: Hi there I got it! I need to configure LVM on top of iSCSI in order to use LVM shared storage... RTFM! =D Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 5 de dez. de 2019 ?s 19:02, Martin Holub via pve-user < pve-user at pve.proxmox.com> escreveu: > > > > ---------- Forwarded message ---------- > From: Martin Holub > To: pve-user at pve.proxmox.com > Cc: > Bcc: > Date: Thu, 5 Dec 2019 23:01:49 +0100 > Subject: Re: [PVE-User] Storage iSCSI and multipath > Am 05.12.19 um 20:53 schrieb Gilberto Nunes: > > > Hi there > > > > I have some doubt regarding multipath iscsi > > I have set up the multipath and it's ok... > > But my doubt is how can I point the storage in proxmox to see both IP to > > connect to iscsi storage? > > I tried this > > iscsi: VM-DATA > > portal 192.168.130.102 > > portal 192.168.131.102 > > target > > > iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139OK > > content images > > > > but when I list the storage > > pvesm list VM-DATA > > got this error > > > > file /etc/pve/storage.cfg line 7 (section 'VM-DATA') - unable to parse > > value of 'portal': duplicate attribute > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > Hi, > > If i haven't missed it, there is still no multipath Support within > Proxmox? You will need to use something like multipath tools on debian, > create a Blockdevice and then use with LVM or whatever. There is an > Article in the Wiki covering the Setup: > https://pve.proxmox.com/wiki/ISCSI_Multipath > > Best > Martin > > > > > ---------- Forwarded message ---------- > From: Martin Holub via pve-user > To: pve-user at pve.proxmox.com > Cc: Martin Holub > Bcc: > Date: Thu, 5 Dec 2019 23:01:49 +0100 > Subject: Re: [PVE-User] Storage iSCSI and multipath > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From lindsay.mathieson at gmail.com Fri Dec 6 13:40:41 2019 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 6 Dec 2019 22:40:41 +1000 Subject: [PVE-User] Upgrade to 6.1 successfull, but no web ui on one node, cannot start VM's on that node Message-ID: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> 3 Node cluster - vnb, vnh. vni * Successful upgrade from 5.4 to 6.1 * Cluster status good for all three nodes pvecm status Cluster information ------------------- Name:???????????? Softlog Config Version:?? 15 Transport:??????? knet Secure auth:????? 
on Quorum information ------------------ Date:???????????? Fri Dec? 6 22:39:31 2019 Quorum provider:? corosync_votequorum Nodes:??????????? 3 Node ID:????????? 0x00000001 Ring ID:????????? 1.4c Quorate:????????? Yes Votequorum information ---------------------- Expected votes:?? 3 Highest expected: 3 Total votes:????? 3 Quorum:?????????? 2 Flags:??????????? Quorate Membership information ---------------------- ??? Nodeid????? Votes Name 0x00000001????????? 1 192.168.5.240 (local) 0x00000002????????? 1 192.168.5.245 0x00000003????????? 1 192.168.5.244 I can access the web ui on vnh & vni, but it just times out for vnb. Same when trying to access vnb info via the webui from other nodes. Any ideas? -- Lindsay From t.marx at proxmox.com Fri Dec 6 14:07:58 2019 From: t.marx at proxmox.com (Tim Marx) Date: Fri, 6 Dec 2019 14:07:58 +0100 (CET) Subject: [PVE-User] Upgrade to 6.1 successfull, but no web ui on one node, cannot start VM's on that node In-Reply-To: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> References: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> Message-ID: <484399534.52.1575637678275@webmail.proxmox.com> > Lindsay Mathieson hat am 6. Dezember 2019 13:40 geschrieben: > > > 3 Node cluster - vnb, vnh. vni > > * Successful upgrade from 5.4 to 6.1 > * Cluster status good for all three nodes > pvecm status > Cluster information > ------------------- > Name:???????????? Softlog > Config Version:?? 15 > Transport:??????? knet > Secure auth:????? on > > Quorum information > ------------------ > Date:???????????? Fri Dec? 6 22:39:31 2019 > Quorum provider:? corosync_votequorum > Nodes:??????????? 3 > Node ID:????????? 0x00000001 > Ring ID:????????? 1.4c > Quorate:????????? Yes > > Votequorum information > ---------------------- > Expected votes:?? 3 > Highest expected: 3 > Total votes:????? 3 > Quorum:?????????? 2 > Flags:??????????? Quorate > > Membership information > ---------------------- > ??? Nodeid????? Votes Name > 0x00000001????????? 1 192.168.5.240 (local) > 0x00000002????????? 1 192.168.5.245 > 0x00000003????????? 1 192.168.5.244 > > > I can access the web ui on vnh & vni, but it just times out for vnb. > Same when trying to access vnb info via the webui from other nodes. > > > Any ideas? > > -- > Lindsay > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user Try to ssh into that node and check the status of pve-cluster e.g. # systemctl status pve-cluster If it had problems while startup, try to restart the service. best regard From lindsay.mathieson at gmail.com Fri Dec 6 14:10:18 2019 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 6 Dec 2019 23:10:18 +1000 Subject: [PVE-User] Upgrade to 6.1 successfull, but no web ui on one node, cannot start VM's on that node In-Reply-To: <484399534.52.1575637678275@webmail.proxmox.com> References: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> <484399534.52.1575637678275@webmail.proxmox.com> Message-ID: <54c4afd5-c895-f449-1ca9-a6e8238d6196@gmail.com> On 6/12/2019 11:07 pm, Tim Marx wrote: > Try to ssh into that node and check the status of pve-cluster e.g. > # systemctl status pve-cluster > > If it had problems while startup, try to restart the service. Thanks - seems ok though: systemctl status pve-cluster ? pve-cluster.service - The Proxmox VE cluster filesystem ?? Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled) ?? 
Active: active (running) since Fri 2019-12-06 22:24:07 AEST; 44min ago ?Main PID: 2131 (pmxcfs) ??? Tasks: 6 (limit: 4915) ?? Memory: 30.2M ?? CGroup: /system.slice/pve-cluster.service ?????????? ??2131 /usr/bin/pmxcfs Dec 06 22:28:30 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:28:31 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:30:36 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:38:35 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:44:06 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:45:37 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:47:06 vnb pmxcfs[2131]: [status] notice: received log Dec 06 22:53:24 vnb pmxcfs[2131]: [status] notice: received log Dec 06 23:00:38 vnb pmxcfs[2131]: [status] notice: received log Dec 06 23:08:25 vnb pmxcfs[2131]: [status] notice: received log -- Lindsay From lindsay.mathieson at gmail.com Fri Dec 6 14:21:05 2019 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Fri, 6 Dec 2019 23:21:05 +1000 Subject: [PVE-User] Upgrade to 6.1 successfull, but no web ui on one node, cannot start VM's on that node In-Reply-To: <54c4afd5-c895-f449-1ca9-a6e8238d6196@gmail.com> References: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> <484399534.52.1575637678275@webmail.proxmox.com> <54c4afd5-c895-f449-1ca9-a6e8238d6196@gmail.com> Message-ID: Solved it - there were a lot off ssl errors in syslog, needed to run: * pvecm updatecerts? -f Dunno how it became a problem as I've never fiddled with custom certs On 6/12/2019 11:10 pm, Lindsay Mathieson wrote: > On 6/12/2019 11:07 pm, Tim Marx wrote: >> Try to ssh into that node and check the status of pve-cluster e.g. >> # systemctl status pve-cluster >> >> If it had problems while startup, try to restart the service. > > > Thanks - seems ok though: > > > systemctl status pve-cluster > ? pve-cluster.service - The Proxmox VE cluster filesystem > ?? Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; > vendor preset: enabled) > ?? Active: active (running) since Fri 2019-12-06 22:24:07 AEST; 44min ago > ?Main PID: 2131 (pmxcfs) > ??? Tasks: 6 (limit: 4915) > ?? Memory: 30.2M > ?? CGroup: /system.slice/pve-cluster.service > ?????????? ??2131 /usr/bin/pmxcfs > > Dec 06 22:28:30 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:28:31 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:30:36 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:38:35 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:44:06 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:45:37 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:47:06 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 22:53:24 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 23:00:38 vnb pmxcfs[2131]: [status] notice: received log > Dec 06 23:08:25 vnb pmxcfs[2131]: [status] notice: received log > > -- Lindsay From t.lamprecht at proxmox.com Fri Dec 6 16:05:36 2019 From: t.lamprecht at proxmox.com (Thomas Lamprecht) Date: Fri, 6 Dec 2019 16:05:36 +0100 Subject: [PVE-User] Upgrade to 6.1 successfull, but no web ui on one node, cannot start VM's on that node In-Reply-To: References: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> <484399534.52.1575637678275@webmail.proxmox.com> <54c4afd5-c895-f449-1ca9-a6e8238d6196@gmail.com> Message-ID: <92b4f6ca-261b-f623-3a44-5d9a83ee8ee4@proxmox.com> On 12/6/19 2:21 PM, Lindsay Mathieson wrote: > Solved it - there were a lot off ssl errors in syslog, needed to run: > > ?* pvecm updatecerts? 
-f > > > Dunno how it became a problem as I've never fiddled with custom certs > maybe you got hit by the stricter security policy on Debian 10: https://www.debian.org/releases/buster/amd64/release-notes/ch-information.en.html#openssl-defaults But our upgrade checker script should had noticed that, though - so not sure. > On 6/12/2019 11:10 pm, Lindsay Mathieson wrote: >> On 6/12/2019 11:07 pm, Tim Marx wrote: >>> Try to ssh into that node and check the status of pve-cluster e.g. >>> # systemctl status pve-cluster >>> >>> If it had problems while startup, try to restart the service. >> >> >> Thanks - seems ok though: >> >> >> systemctl status pve-cluster >> ? pve-cluster.service - The Proxmox VE cluster filesystem >> ?? Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled) >> ?? Active: active (running) since Fri 2019-12-06 22:24:07 AEST; 44min ago >> ?Main PID: 2131 (pmxcfs) >> ??? Tasks: 6 (limit: 4915) >> ?? Memory: 30.2M >> ?? CGroup: /system.slice/pve-cluster.service >> ?????????? ??2131 /usr/bin/pmxcfs >> >> Dec 06 22:28:30 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:28:31 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:30:36 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:38:35 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:44:06 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:45:37 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:47:06 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 22:53:24 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 23:00:38 vnb pmxcfs[2131]: [status] notice: received log >> Dec 06 23:08:25 vnb pmxcfs[2131]: [status] notice: received log >> >> > From gaio at sv.lnf.it Tue Dec 10 14:54:45 2019 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Tue, 10 Dec 2019 14:54:45 +0100 Subject: [PVE-User] Watchdog in containers? Message-ID: <20191210135445.GL3544@sv.lnf.it> I've a container that, suddenly, consume all the RAM available, start to swap and load climb to 10+ . Clearly, i've to find the guilty process (probably it is Samba) but in the meantime... there's some sort of 'watchdog' for containers? I can safely install 'watchdog' on containers, disable /dev/watchdog and configure to do a reboot if load go too high? eg: max-load-1 = 24 max-load-5 = 18 max-load-15 = 12 ? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From gilberto.nunes32 at gmail.com Wed Dec 11 19:55:09 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 11 Dec 2019 15:55:09 -0300 Subject: [PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH Message-ID: Hi there I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i. I have successful deploy the storage to give access to block device via 2 different IPs How ever, I can not deploy multipath properly. 
When I try multipath -ll give me nothing or multipath -v3 tells me that the wwid is not in the file BTW, I have used this site as reference: https://pve.proxmox.com/wiki/ISCSI_Multipath https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/ Here's the /etc/multipath.conf: defaults { polling_interval 3 path_selector "round-robin 0" max_fds "max" path_grouping_policy multibus uid_attribute "ID_SERIAL" rr_min_io 100 failback immediate no_path_retry queue } blacklist { wwid .* devnode "^sda" device { vendor "DELL" product "Universal Xport" } } devices { device { vendor "DELL" product "MD32xxi" path_grouping_policy group_by_prio prio rdac #polling_interval 5 path_checker rdac path_selector "round-robin 0" hardware_handler "1 rdac" failback immediate features "2 pg_init_retries 50" no_path_retry 30 rr_min_io 100 #prio_callout "/sbin/mpath_prio_rdac /dev/%n" } } blacklist_exceptions { wwid "361418770003f0d2800000a6c5df0a959" } multipaths { multipath { wwid "361418770003f0d2800000a6c5df0a959" alias mpath0 } } lsscsi [0:0:0:0] disk HGST HUS726040ALS210 KU27 /dev/sda [14:0:0:0] cd/dvd PLDS DVD+-RW DU-8A5LH 6D5M /dev/sr0 [15:0:0:0] disk DELL MD32xxi 0820 /dev/sdb [15:0:0:31] disk DELL Universal Xport 0820 - [16:0:0:0] disk DELL MD32xxi 0820 /dev/sdc [16:0:0:31] disk DELL Universal Xport 0820 - multipath -v3 Dec 11 15:53:42 | set open fds limit to 1048576/1048576 Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker Dec 11 15:53:42 | checker tur: message table size = 3 Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer Dec 11 15:53:42 | foreign library "nvme" loaded successfully Dec 11 15:53:42 | sr0: blacklisted, udev property missing Dec 11 15:53:42 | sda: udev property ID_WWN whitelisted Dec 11 15:53:42 | sda: device node name blacklisted Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted Dec 11 15:53:42 | sdb: mask = 0x1f Dec 11 15:53:42 | sdb: dev_t = 8:16 Dec 11 15:53:42 | sdb: size = 2097152 Dec 11 15:53:42 | sdb: vendor = DELL Dec 11 15:53:42 | sdb: product = MD32xxi Dec 11 15:53:42 | sdb: rev = 0820 Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 Dec 11 15:53:42 | sdb: h:b:t:l = 15:0:0:0 Dec 11 15:53:42 | sdb: tgt_node_name = iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 Dec 11 15:53:42 | sdb: path state = running Dec 11 15:53:42 | sdb: 1011 cyl, 34 heads, 61 sectors/track, start at 0 Dec 11 15:53:42 | sdb: serial = 3CS00II Dec 11 15:53:42 | sdb: get_state Dec 11 15:53:42 | sdb: detect_checker = yes (setting: multipath internal) Dec 11 15:53:42 | loading //lib/multipath/libcheckrdac.so checker Dec 11 15:53:42 | checker rdac: message table size = 9 Dec 11 15:53:42 | sdb: path_checker = rdac (setting: storage device autodetected) Dec 11 15:53:42 | sdb: checker timeout = 30 s (setting: kernel sysfs) Dec 11 15:53:42 | sdb: rdac state = up Dec 11 15:53:42 | sdb: uid_attribute = ID_SERIAL (setting: multipath.conf defaults/devices section) Dec 11 15:53:42 | sdb: uid = 361418770003f0d2800000a6c5df0a959 (udev) Dec 11 15:53:42 | sdb: detect_prio = yes (setting: multipath internal) Dec 11 15:53:42 | loading //lib/multipath/libpriordac.so prioritizer Dec 11 15:53:42 | sdb: prio = rdac (setting: storage device configuration) Dec 11 15:53:42 | sdb: prio args = "" (setting: storage device configuration) Dec 11 15:53:42 | sdb: rdac prio = 11 Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted Dec 11 15:53:42 | sdc: mask = 0x1f Dec 11 15:53:42 | sdc: dev_t = 8:32 
Dec 11 15:53:42 | sdc: size = 2097152 Dec 11 15:53:42 | sdc: vendor = DELL Dec 11 15:53:42 | sdc: product = MD32xxi Dec 11 15:53:42 | sdc: rev = 0820 Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 Dec 11 15:53:42 | sdc: h:b:t:l = 16:0:0:0 Dec 11 15:53:42 | sdc: tgt_node_name = iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 Dec 11 15:53:42 | sdc: path state = running Dec 11 15:53:42 | sdc: 1011 cyl, 34 heads, 61 sectors/track, start at 0 Dec 11 15:53:42 | sdc: serial = 3CS00II Dec 11 15:53:42 | sdc: get_state Dec 11 15:53:42 | sdc: detect_checker = yes (setting: multipath internal) Dec 11 15:53:42 | sdc: path_checker = rdac (setting: storage device autodetected) Dec 11 15:53:42 | sdc: checker timeout = 30 s (setting: kernel sysfs) Dec 11 15:53:42 | sdc: rdac state = up Dec 11 15:53:42 | sdc: uid_attribute = ID_SERIAL (setting: multipath.conf defaults/devices section) Dec 11 15:53:42 | sdc: uid = 361418770003f0d2800000a6c5df0a959 (udev) Dec 11 15:53:42 | sdc: detect_prio = yes (setting: multipath internal) Dec 11 15:53:42 | sdc: prio = rdac (setting: storage device configuration) Dec 11 15:53:42 | sdc: prio args = "" (setting: storage device configuration) Dec 11 15:53:42 | sdc: rdac prio = 11 Dec 11 15:53:42 | loop0: blacklisted, udev property missing Dec 11 15:53:42 | loop1: blacklisted, udev property missing Dec 11 15:53:42 | loop2: blacklisted, udev property missing Dec 11 15:53:42 | loop3: blacklisted, udev property missing Dec 11 15:53:42 | loop4: blacklisted, udev property missing Dec 11 15:53:42 | loop5: blacklisted, udev property missing Dec 11 15:53:42 | loop6: blacklisted, udev property missing Dec 11 15:53:42 | loop7: blacklisted, udev property missing Dec 11 15:53:42 | dm-0: blacklisted, udev property missing Dec 11 15:53:42 | dm-1: blacklisted, udev property missing ===== paths list ===== uuid hcil dev dev_t pri dm_st chk_st vend/pro 361418770003f0d2800000a6c5df0a959 15:0:0:0 sdb 8:16 11 undef undef DELL,MD3 361418770003f0d2800000a6c5df0a959 16:0:0:0 sdc 8:32 11 undef undef DELL,MD3 Dec 11 15:53:42 | libdevmapper version 1.02.155 (2018-12-18) Dec 11 15:53:42 | DM multipath kernel driver v1.13.0 Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted Dec 11 15:53:42 | sdb: wwid 361418770003f0d2800000a6c5df0a959 whitelisted Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids file, skipping sdb Dec 11 15:53:42 | sdb: orphan path, only one path Dec 11 15:53:42 | rdac prioritizer refcount 2 Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted Dec 11 15:53:42 | sdc: wwid 361418770003f0d2800000a6c5df0a959 whitelisted Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids file, skipping sdc Dec 11 15:53:42 | sdc: orphan path, only one path Dec 11 15:53:42 | rdac prioritizer refcount 1 Dec 11 15:53:42 | unloading rdac prioritizer Dec 11 15:53:42 | unloading const prioritizer Dec 11 15:53:42 | unloading rdac checker Dec 11 15:53:42 | unloading tur checker I will appreciated any help. 
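For what it's worth, the "not in wwids file, skipping sdb" / "skipping sdc" messages above are the interesting part: the paths are being skipped because their wwid has not been recorded yet. A possible way to record it by hand, using the device names from the log above (only a sketch; the same result can also be reached through multipath.conf settings):

multipath -a /dev/sdb    # adds the wwid of sdb to /etc/multipath/wwids
multipath -a /dev/sdc
multipath -r             # reload the device maps
multipath -ll            # the DELL,MD32xxi map should now list both paths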
Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From gilberto.nunes32 at gmail.com Wed Dec 11 20:25:21 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 11 Dec 2019 16:25:21 -0300 Subject: [PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH In-Reply-To: References: Message-ID: After spending hours trying to fix this, I finally figure it out: After fill the file /etc/multipath/wwids with the wwids of the iscsi disk, like this: cat /etc/multipath/wwids # Multipath wwids, Version : 1.0 # NOTE: This file is automatically maintained by multipath and multipathd. # You should not need to edit this file in normal circumstances. # # Valid WWIDs: /361418770003f0d2800000a6c5df0a959/ Now everything is work properly multipath -ll mylun (361418770003f0d2800000a6c5df0a959) dm-2 DELL,MD32xxi size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw `-+- policy='round-robin 0' prio=9 status=active |- 15:0:0:0 sdb 8:16 active ready running `- 16:0:0:0 sdc 8:32 active ready running ls /dev/mapper/ control mylun pve-root pve-swap I don't know what's happen here, but in my case I needed to do this manually! This file in /etc/mulitpath/wwids should not be full fill automatically by multipath??? So weird! --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qua., 11 de dez. de 2019 ?s 15:55, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Hi there > > I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i. > I have successful deploy the storage to give access to block device via 2 > different IPs > How ever, I can not deploy multipath properly. > When I try multipath -ll give me nothing or multipath -v3 tells me that > the wwid is not in the file > BTW, I have used this site as reference: > https://pve.proxmox.com/wiki/ISCSI_Multipath > > > > https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/ > > > Here's the /etc/multipath.conf: > defaults { > polling_interval 3 > path_selector "round-robin 0" > max_fds "max" > path_grouping_policy multibus > uid_attribute "ID_SERIAL" > rr_min_io 100 > failback immediate > no_path_retry queue > } > > blacklist { > wwid .* > devnode "^sda" > device { > vendor "DELL" > product "Universal Xport" > } > } > devices { > device { > vendor "DELL" > product "MD32xxi" > path_grouping_policy group_by_prio > prio rdac > #polling_interval 5 > path_checker rdac > path_selector "round-robin 0" > hardware_handler "1 rdac" > failback immediate > features "2 pg_init_retries 50" > no_path_retry 30 > rr_min_io 100 > #prio_callout "/sbin/mpath_prio_rdac /dev/%n" > } > } > blacklist_exceptions { > wwid "361418770003f0d2800000a6c5df0a959" > } > multipaths { > multipath { > wwid "361418770003f0d2800000a6c5df0a959" > alias mpath0 > } > > } > lsscsi > [0:0:0:0] disk HGST HUS726040ALS210 KU27 /dev/sda > [14:0:0:0] cd/dvd PLDS DVD+-RW DU-8A5LH 6D5M /dev/sr0 > [15:0:0:0] disk DELL MD32xxi 0820 /dev/sdb > [15:0:0:31] disk DELL Universal Xport 0820 - > [16:0:0:0] disk DELL MD32xxi 0820 /dev/sdc > [16:0:0:31] disk DELL Universal Xport 0820 - > > multipath -v3 > Dec 11 15:53:42 | set open fds limit to 1048576/1048576 > Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker > Dec 11 15:53:42 | checker tur: message table size = 3 > Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer > Dec 11 15:53:42 | foreign library "nvme" 
loaded successfully > Dec 11 15:53:42 | sr0: blacklisted, udev property missing > Dec 11 15:53:42 | sda: udev property ID_WWN whitelisted > Dec 11 15:53:42 | sda: device node name blacklisted > Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted > Dec 11 15:53:42 | sdb: mask = 0x1f > Dec 11 15:53:42 | sdb: dev_t = 8:16 > Dec 11 15:53:42 | sdb: size = 2097152 > Dec 11 15:53:42 | sdb: vendor = DELL > Dec 11 15:53:42 | sdb: product = MD32xxi > Dec 11 15:53:42 | sdb: rev = 0820 > Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 > Dec 11 15:53:42 | sdb: h:b:t:l = 15:0:0:0 > Dec 11 15:53:42 | sdb: tgt_node_name = > iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 > Dec 11 15:53:42 | sdb: path state = running > Dec 11 15:53:42 | sdb: 1011 cyl, 34 heads, 61 sectors/track, start at 0 > Dec 11 15:53:42 | sdb: serial = 3CS00II > Dec 11 15:53:42 | sdb: get_state > Dec 11 15:53:42 | sdb: detect_checker = yes (setting: multipath internal) > Dec 11 15:53:42 | loading //lib/multipath/libcheckrdac.so checker > Dec 11 15:53:42 | checker rdac: message table size = 9 > Dec 11 15:53:42 | sdb: path_checker = rdac (setting: storage device > autodetected) > Dec 11 15:53:42 | sdb: checker timeout = 30 s (setting: kernel sysfs) > Dec 11 15:53:42 | sdb: rdac state = up > Dec 11 15:53:42 | sdb: uid_attribute = ID_SERIAL (setting: multipath.conf > defaults/devices section) > Dec 11 15:53:42 | sdb: uid = 361418770003f0d2800000a6c5df0a959 (udev) > Dec 11 15:53:42 | sdb: detect_prio = yes (setting: multipath internal) > Dec 11 15:53:42 | loading //lib/multipath/libpriordac.so prioritizer > Dec 11 15:53:42 | sdb: prio = rdac (setting: storage device configuration) > Dec 11 15:53:42 | sdb: prio args = "" (setting: storage device > configuration) > Dec 11 15:53:42 | sdb: rdac prio = 11 > Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted > Dec 11 15:53:42 | sdc: mask = 0x1f > Dec 11 15:53:42 | sdc: dev_t = 8:32 > Dec 11 15:53:42 | sdc: size = 2097152 > Dec 11 15:53:42 | sdc: vendor = DELL > Dec 11 15:53:42 | sdc: product = MD32xxi > Dec 11 15:53:42 | sdc: rev = 0820 > Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 > Dec 11 15:53:42 | sdc: h:b:t:l = 16:0:0:0 > Dec 11 15:53:42 | sdc: tgt_node_name = > iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 > Dec 11 15:53:42 | sdc: path state = running > Dec 11 15:53:42 | sdc: 1011 cyl, 34 heads, 61 sectors/track, start at 0 > Dec 11 15:53:42 | sdc: serial = 3CS00II > Dec 11 15:53:42 | sdc: get_state > Dec 11 15:53:42 | sdc: detect_checker = yes (setting: multipath internal) > Dec 11 15:53:42 | sdc: path_checker = rdac (setting: storage device > autodetected) > Dec 11 15:53:42 | sdc: checker timeout = 30 s (setting: kernel sysfs) > Dec 11 15:53:42 | sdc: rdac state = up > Dec 11 15:53:42 | sdc: uid_attribute = ID_SERIAL (setting: multipath.conf > defaults/devices section) > Dec 11 15:53:42 | sdc: uid = 361418770003f0d2800000a6c5df0a959 (udev) > Dec 11 15:53:42 | sdc: detect_prio = yes (setting: multipath internal) > Dec 11 15:53:42 | sdc: prio = rdac (setting: storage device configuration) > Dec 11 15:53:42 | sdc: prio args = "" (setting: storage device > configuration) > Dec 11 15:53:42 | sdc: rdac prio = 11 > Dec 11 15:53:42 | loop0: blacklisted, udev property missing > Dec 11 15:53:42 | loop1: blacklisted, udev property missing > Dec 11 15:53:42 | loop2: blacklisted, udev property missing > Dec 11 15:53:42 | loop3: blacklisted, udev property missing > Dec 11 15:53:42 | 
loop4: blacklisted, udev property missing > Dec 11 15:53:42 | loop5: blacklisted, udev property missing > Dec 11 15:53:42 | loop6: blacklisted, udev property missing > Dec 11 15:53:42 | loop7: blacklisted, udev property missing > Dec 11 15:53:42 | dm-0: blacklisted, udev property missing > Dec 11 15:53:42 | dm-1: blacklisted, udev property missing > ===== paths list ===== > uuid hcil dev dev_t pri dm_st chk_st > vend/pro > 361418770003f0d2800000a6c5df0a959 15:0:0:0 sdb 8:16 11 undef undef > DELL,MD3 > 361418770003f0d2800000a6c5df0a959 16:0:0:0 sdc 8:32 11 undef undef > DELL,MD3 > Dec 11 15:53:42 | libdevmapper version 1.02.155 (2018-12-18) > Dec 11 15:53:42 | DM multipath kernel driver v1.13.0 > Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted > Dec 11 15:53:42 | sdb: wwid 361418770003f0d2800000a6c5df0a959 whitelisted > Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids > file, skipping sdb > Dec 11 15:53:42 | sdb: orphan path, only one path > Dec 11 15:53:42 | rdac prioritizer refcount 2 > Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted > Dec 11 15:53:42 | sdc: wwid 361418770003f0d2800000a6c5df0a959 whitelisted > Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids > file, skipping sdc > Dec 11 15:53:42 | sdc: orphan path, only one path > Dec 11 15:53:42 | rdac prioritizer refcount 1 > Dec 11 15:53:42 | unloading rdac prioritizer > Dec 11 15:53:42 | unloading const prioritizer > Dec 11 15:53:42 | unloading rdac checker > Dec 11 15:53:42 | unloading tur checker > > I will appreciated any help. > > Thanks > > > > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > From gilberto.nunes32 at gmail.com Wed Dec 11 20:38:55 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 11 Dec 2019 16:38:55 -0300 Subject: [PVE-User] Need help with iscsi Dell MD3200i and MULTIPATH In-Reply-To: References: Message-ID: But still I am in doubt... This is some Debian Buster bug???? --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qua., 11 de dez. de 2019 ?s 16:25, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > After spending hours trying to fix this, I finally figure it out: > After fill the file /etc/multipath/wwids with the wwids of the iscsi disk, > like this: > cat /etc/multipath/wwids > # Multipath wwids, Version : 1.0 > # NOTE: This file is automatically maintained by multipath and multipathd. > # You should not need to edit this file in normal circumstances. > # > # Valid WWIDs: > /361418770003f0d2800000a6c5df0a959/ > > Now everything is work properly > multipath -ll > mylun (361418770003f0d2800000a6c5df0a959) dm-2 DELL,MD32xxi > size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 > rdac' wp=rw > `-+- policy='round-robin 0' prio=9 status=active > |- 15:0:0:0 sdb 8:16 active ready running > `- 16:0:0:0 sdc 8:32 active ready running > ls /dev/mapper/ > control mylun pve-root pve-swap > > I don't know what's happen here, but in my case I needed to do this > manually! > This file in /etc/mulitpath/wwids should not be full fill automatically by > multipath??? So weird! > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qua., 11 de dez. 
de 2019 ?s 15:55, Gilberto Nunes < > gilberto.nunes32 at gmail.com> escreveu: > >> Hi there >> >> I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i. >> I have successful deploy the storage to give access to block device via 2 >> different IPs >> How ever, I can not deploy multipath properly. >> When I try multipath -ll give me nothing or multipath -v3 tells me that >> the wwid is not in the file >> BTW, I have used this site as reference: >> https://pve.proxmox.com/wiki/ISCSI_Multipath >> >> >> >> https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/ >> >> >> Here's the /etc/multipath.conf: >> defaults { >> polling_interval 3 >> path_selector "round-robin 0" >> max_fds "max" >> path_grouping_policy multibus >> uid_attribute "ID_SERIAL" >> rr_min_io 100 >> failback immediate >> no_path_retry queue >> } >> >> blacklist { >> wwid .* >> devnode "^sda" >> device { >> vendor "DELL" >> product "Universal Xport" >> } >> } >> devices { >> device { >> vendor "DELL" >> product "MD32xxi" >> path_grouping_policy group_by_prio >> prio rdac >> #polling_interval 5 >> path_checker rdac >> path_selector "round-robin 0" >> hardware_handler "1 rdac" >> failback immediate >> features "2 pg_init_retries 50" >> no_path_retry 30 >> rr_min_io 100 >> #prio_callout "/sbin/mpath_prio_rdac /dev/%n" >> } >> } >> blacklist_exceptions { >> wwid "361418770003f0d2800000a6c5df0a959" >> } >> multipaths { >> multipath { >> wwid "361418770003f0d2800000a6c5df0a959" >> alias mpath0 >> } >> >> } >> lsscsi >> [0:0:0:0] disk HGST HUS726040ALS210 KU27 /dev/sda >> [14:0:0:0] cd/dvd PLDS DVD+-RW DU-8A5LH 6D5M /dev/sr0 >> [15:0:0:0] disk DELL MD32xxi 0820 /dev/sdb >> [15:0:0:31] disk DELL Universal Xport 0820 - >> [16:0:0:0] disk DELL MD32xxi 0820 /dev/sdc >> [16:0:0:31] disk DELL Universal Xport 0820 - >> >> multipath -v3 >> Dec 11 15:53:42 | set open fds limit to 1048576/1048576 >> Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker >> Dec 11 15:53:42 | checker tur: message table size = 3 >> Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer >> Dec 11 15:53:42 | foreign library "nvme" loaded successfully >> Dec 11 15:53:42 | sr0: blacklisted, udev property missing >> Dec 11 15:53:42 | sda: udev property ID_WWN whitelisted >> Dec 11 15:53:42 | sda: device node name blacklisted >> Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted >> Dec 11 15:53:42 | sdb: mask = 0x1f >> Dec 11 15:53:42 | sdb: dev_t = 8:16 >> Dec 11 15:53:42 | sdb: size = 2097152 >> Dec 11 15:53:42 | sdb: vendor = DELL >> Dec 11 15:53:42 | sdb: product = MD32xxi >> Dec 11 15:53:42 | sdb: rev = 0820 >> Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 >> Dec 11 15:53:42 | sdb: h:b:t:l = 15:0:0:0 >> Dec 11 15:53:42 | sdb: tgt_node_name = >> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 >> Dec 11 15:53:42 | sdb: path state = running >> Dec 11 15:53:42 | sdb: 1011 cyl, 34 heads, 61 sectors/track, start at 0 >> Dec 11 15:53:42 | sdb: serial = 3CS00II >> Dec 11 15:53:42 | sdb: get_state >> Dec 11 15:53:42 | sdb: detect_checker = yes (setting: multipath internal) >> Dec 11 15:53:42 | loading //lib/multipath/libcheckrdac.so checker >> Dec 11 15:53:42 | checker rdac: message table size = 9 >> Dec 11 15:53:42 | sdb: path_checker = rdac (setting: storage device >> autodetected) >> Dec 11 15:53:42 | sdb: checker timeout = 30 s (setting: kernel sysfs) >> Dec 11 15:53:42 | sdb: rdac state = up >> Dec 11 15:53:42 | sdb: 
uid_attribute = ID_SERIAL (setting: multipath.conf >> defaults/devices section) >> Dec 11 15:53:42 | sdb: uid = 361418770003f0d2800000a6c5df0a959 (udev) >> Dec 11 15:53:42 | sdb: detect_prio = yes (setting: multipath internal) >> Dec 11 15:53:42 | loading //lib/multipath/libpriordac.so prioritizer >> Dec 11 15:53:42 | sdb: prio = rdac (setting: storage device configuration) >> Dec 11 15:53:42 | sdb: prio args = "" (setting: storage device >> configuration) >> Dec 11 15:53:42 | sdb: rdac prio = 11 >> Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted >> Dec 11 15:53:42 | sdc: mask = 0x1f >> Dec 11 15:53:42 | sdc: dev_t = 8:32 >> Dec 11 15:53:42 | sdc: size = 2097152 >> Dec 11 15:53:42 | sdc: vendor = DELL >> Dec 11 15:53:42 | sdc: product = MD32xxi >> Dec 11 15:53:42 | sdc: rev = 0820 >> Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for DELL:MD32xxi:0820 >> Dec 11 15:53:42 | sdc: h:b:t:l = 16:0:0:0 >> Dec 11 15:53:42 | sdc: tgt_node_name = >> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 >> Dec 11 15:53:42 | sdc: path state = running >> Dec 11 15:53:42 | sdc: 1011 cyl, 34 heads, 61 sectors/track, start at 0 >> Dec 11 15:53:42 | sdc: serial = 3CS00II >> Dec 11 15:53:42 | sdc: get_state >> Dec 11 15:53:42 | sdc: detect_checker = yes (setting: multipath internal) >> Dec 11 15:53:42 | sdc: path_checker = rdac (setting: storage device >> autodetected) >> Dec 11 15:53:42 | sdc: checker timeout = 30 s (setting: kernel sysfs) >> Dec 11 15:53:42 | sdc: rdac state = up >> Dec 11 15:53:42 | sdc: uid_attribute = ID_SERIAL (setting: multipath.conf >> defaults/devices section) >> Dec 11 15:53:42 | sdc: uid = 361418770003f0d2800000a6c5df0a959 (udev) >> Dec 11 15:53:42 | sdc: detect_prio = yes (setting: multipath internal) >> Dec 11 15:53:42 | sdc: prio = rdac (setting: storage device configuration) >> Dec 11 15:53:42 | sdc: prio args = "" (setting: storage device >> configuration) >> Dec 11 15:53:42 | sdc: rdac prio = 11 >> Dec 11 15:53:42 | loop0: blacklisted, udev property missing >> Dec 11 15:53:42 | loop1: blacklisted, udev property missing >> Dec 11 15:53:42 | loop2: blacklisted, udev property missing >> Dec 11 15:53:42 | loop3: blacklisted, udev property missing >> Dec 11 15:53:42 | loop4: blacklisted, udev property missing >> Dec 11 15:53:42 | loop5: blacklisted, udev property missing >> Dec 11 15:53:42 | loop6: blacklisted, udev property missing >> Dec 11 15:53:42 | loop7: blacklisted, udev property missing >> Dec 11 15:53:42 | dm-0: blacklisted, udev property missing >> Dec 11 15:53:42 | dm-1: blacklisted, udev property missing >> ===== paths list ===== >> uuid hcil dev dev_t pri dm_st chk_st >> vend/pro >> 361418770003f0d2800000a6c5df0a959 15:0:0:0 sdb 8:16 11 undef undef >> DELL,MD3 >> 361418770003f0d2800000a6c5df0a959 16:0:0:0 sdc 8:32 11 undef undef >> DELL,MD3 >> Dec 11 15:53:42 | libdevmapper version 1.02.155 (2018-12-18) >> Dec 11 15:53:42 | DM multipath kernel driver v1.13.0 >> Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted >> Dec 11 15:53:42 | sdb: wwid 361418770003f0d2800000a6c5df0a959 whitelisted >> Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids >> file, skipping sdb >> Dec 11 15:53:42 | sdb: orphan path, only one path >> Dec 11 15:53:42 | rdac prioritizer refcount 2 >> Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted >> Dec 11 15:53:42 | sdc: wwid 361418770003f0d2800000a6c5df0a959 whitelisted >> Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids >> file, skipping sdc >> Dec 11 15:53:42 | 
sdc: orphan path, only one path >> Dec 11 15:53:42 | rdac prioritizer refcount 1 >> Dec 11 15:53:42 | unloading rdac prioritizer >> Dec 11 15:53:42 | unloading const prioritizer >> Dec 11 15:53:42 | unloading rdac checker >> Dec 11 15:53:42 | unloading tur checker >> >> I will appreciated any help. >> >> Thanks >> >> >> >> >> --- >> Gilberto Nunes Ferreira >> >> (47) 3025-5907 >> (47) 99676-7530 - Whatsapp / Telegram >> >> Skype: gilberto.nunes36 >> >> >> >> From gilberto.nunes32 at gmail.com Thu Dec 12 14:23:53 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 12 Dec 2019 10:23:53 -0300 Subject: [PVE-User] Predicable name for ethernet devices... Message-ID: Hi guys Is there any negative effect if I change the predicable name for ethernet devices, from eno1, eno2, etc, to eth0, eth1.... ???? Thanks a lot --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From fatalerrors at geoffray-levasseur.org Thu Dec 12 15:56:31 2019 From: fatalerrors at geoffray-levasseur.org (Geoffray Levasseur) Date: Thu, 12 Dec 2019 14:56:31 +0000 Subject: [PVE-User] Predicable name for ethernet devices... In-Reply-To: References: Message-ID: <7f1559bd558540fd791cd95c8ca02d5b@geoffray-levasseur.org> Le 12 d?cembre 2019 14:23 "Gilberto Nunes" a ?crit: > Hi guys > > Is there any negative effect if I change the predicable name for ethernet > devices, from eno1, eno2, etc, to eth0, eth1.... ???? > > Thanks a lot > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 Hi, I'm doing that on most of my servers and it work very fine. But I'm not using ethx anymore. I prefer using more explicit names that have the side effect to avoid conflicts with internal kernel names (that are still ethx). So I use adm0 for administration, lan0, linux lan, win0 for windows, wlan0 for wifi or wan0 for public ports... Everything is much clear then. regards, -- Geoffray Levasseur Technicien UPS - UMR CNRS 5566 / LEGOS - Service Informatique http://www.geoffray-levasseur.org/ GNU/PG public key : C89E D6C4 8BFC C9F2 EEFB 908C 89C2 CD4D CD9E 23AA Quod gratis asseritur gratis negatur. From frederic at juliana-multimedia.com Thu Dec 12 17:00:47 2019 From: frederic at juliana-multimedia.com (=?UTF-8?B?RnLDqWTDqXJpYyBNQVNTT1Q=?=) Date: Thu, 12 Dec 2019 17:00:47 +0100 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? Message-ID: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> Hi, I have a question about the discard/trim transmission between the VM / LVM / mdadm layers up to the SSD. I have a server with four disks mounted in a RAID 10 array with mdadm. On this RAID 10 array, there is a volume group with LVM, which contains multiple logical volumes for the hypervisor and VMs. fstrim is periodically launched on the hypervisor and VMs. I know that : - A VM can pass discard/trim commands to the lower layer if it uses a "Virtio SCSI" controller. - LVM (since 2.02.85 with issue_discards enabled) can pass discard/trim to the lower layer during remove or reduce operations of a logical volume. - mdadm supports discard/trim since kernel 3.7. When a VM sends a discard/trim command, is it sent to the SSD, LVM does not block the command? Or is it useless, because mdadm handles discard/trim in his own way? Regards. 
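For completeness, a guest's trim only reaches the host when the virtual disk is set up to pass it on (VirtIO SCSI controller plus discard enabled on the disk). A minimal sketch of that part on the Proxmox side, where the VMID, storage name and volume name are placeholders:

qm set 101 --scsihw virtio-scsi-pci
qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on

# then, inside the guest, on a mounted filesystem:
fstrim -v /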
-- ============================================== | FR?D?RIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic at juliana-multimedia.com | | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | ===========================Debian=GNU/Linux=== From gilberto.nunes32 at gmail.com Thu Dec 12 17:55:59 2019 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 12 Dec 2019 13:55:59 -0300 Subject: [PVE-User] =?utf-8?b?4LiV4Lit4Lia4LiB4Lil4Lix4LiaOiAgTmVlZCBo?= =?utf-8?q?elp_with_iscsi_Dell_MD3200i_and_MULTIPATH?= In-Reply-To: References: Message-ID: Hi there Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 12 de dez. de 2019 ?s 00:19, Kriengsak Panphotong escreveu: > Hi list, > > I found solution on this link. > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932307 > > you need add : "find_multipaths yes" to /etc/multipath.conf > > # multipath.conf example > defaults { > polling_interval 2 > path_selector "round-robin 0" > path_grouping_policy multibus > uid_attribute ID_SERIAL > rr_min_io 100 > failback immediate > no_path_retry queue > user_friendly_names yes > find_multipaths yes > } > > after add , restart multipath-tools (/etc/init.d/multipath-tools restart) > its work great for me. > > ------------------------------ > *???:* pve-user ???????? Gilberto > Nunes > *???:* 12 ??????? 2562 02:38 > *???:* pve-user at pve.proxmox.com > *??????????:* Re: [PVE-User] Need help with iscsi Dell MD3200i and > MULTIPATH > > But still I am in doubt... This is some Debian Buster bug???? > > > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qua., 11 de dez. de 2019 ?s 16:25, Gilberto Nunes < > gilberto.nunes32 at gmail.com> escreveu: > > > After spending hours trying to fix this, I finally figure it out: > > After fill the file /etc/multipath/wwids with the wwids of the iscsi > disk, > > like this: > > cat /etc/multipath/wwids > > # Multipath wwids, Version : 1.0 > > # NOTE: This file is automatically maintained by multipath and > multipathd. > > # You should not need to edit this file in normal circumstances. > > # > > # Valid WWIDs: > > /361418770003f0d2800000a6c5df0a959/ > > > > Now everything is work properly > > multipath -ll > > mylun (361418770003f0d2800000a6c5df0a959) dm-2 DELL,MD32xxi > > size=1.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 > > rdac' wp=rw > > `-+- policy='round-robin 0' prio=9 status=active > > |- 15:0:0:0 sdb 8:16 active ready running > > `- 16:0:0:0 sdc 8:32 active ready running > > ls /dev/mapper/ > > control mylun pve-root pve-swap > > > > I don't know what's happen here, but in my case I needed to do this > > manually! > > This file in /etc/mulitpath/wwids should not be full fill automatically > by > > multipath??? So weird! > > --- > > Gilberto Nunes Ferreira > > > > (47) 3025-5907 > > (47) 99676-7530 - Whatsapp / Telegram > > > > Skype: gilberto.nunes36 > > > > > > > > > > > > Em qua., 11 de dez. de 2019 ?s 15:55, Gilberto Nunes < > > gilberto.nunes32 at gmail.com> escreveu: > > > >> Hi there > >> > >> I am using PVE 6.1-3 and trying to use iscsi over Dell Storage MD3200i. > >> I have successful deploy the storage to give access to block device via > 2 > >> different IPs > >> How ever, I can not deploy multipath properly. 
> >> When I try multipath -ll give me nothing or multipath -v3 tells me that > >> the wwid is not in the file > >> BTW, I have used this site as reference: > >> https://pve.proxmox.com/wiki/ISCSI_Multipath > >> > >> > >> > >> > https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/ > >> > >> > >> Here's the /etc/multipath.conf: > >> defaults { > >> polling_interval 3 > >> path_selector "round-robin 0" > >> max_fds "max" > >> path_grouping_policy multibus > >> uid_attribute "ID_SERIAL" > >> rr_min_io 100 > >> failback immediate > >> no_path_retry queue > >> } > >> > >> blacklist { > >> wwid .* > >> devnode "^sda" > >> device { > >> vendor "DELL" > >> product "Universal Xport" > >> } > >> } > >> devices { > >> device { > >> vendor "DELL" > >> product "MD32xxi" > >> path_grouping_policy group_by_prio > >> prio rdac > >> #polling_interval 5 > >> path_checker rdac > >> path_selector "round-robin 0" > >> hardware_handler "1 rdac" > >> failback immediate > >> features "2 pg_init_retries 50" > >> no_path_retry 30 > >> rr_min_io 100 > >> #prio_callout "/sbin/mpath_prio_rdac /dev/%n" > >> } > >> } > >> blacklist_exceptions { > >> wwid "361418770003f0d2800000a6c5df0a959" > >> } > >> multipaths { > >> multipath { > >> wwid "361418770003f0d2800000a6c5df0a959" > >> alias mpath0 > >> } > >> > >> } > >> lsscsi > >> [0:0:0:0] disk HGST HUS726040ALS210 KU27 /dev/sda > >> [14:0:0:0] cd/dvd PLDS DVD+-RW DU-8A5LH 6D5M /dev/sr0 > >> [15:0:0:0] disk DELL MD32xxi 0820 /dev/sdb > >> [15:0:0:31] disk DELL Universal Xport 0820 - > >> [16:0:0:0] disk DELL MD32xxi 0820 /dev/sdc > >> [16:0:0:31] disk DELL Universal Xport 0820 - > >> > >> multipath -v3 > >> Dec 11 15:53:42 | set open fds limit to 1048576/1048576 > >> Dec 11 15:53:42 | loading //lib/multipath/libchecktur.so checker > >> Dec 11 15:53:42 | checker tur: message table size = 3 > >> Dec 11 15:53:42 | loading //lib/multipath/libprioconst.so prioritizer > >> Dec 11 15:53:42 | foreign library "nvme" loaded successfully > >> Dec 11 15:53:42 | sr0: blacklisted, udev property missing > >> Dec 11 15:53:42 | sda: udev property ID_WWN whitelisted > >> Dec 11 15:53:42 | sda: device node name blacklisted > >> Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted > >> Dec 11 15:53:42 | sdb: mask = 0x1f > >> Dec 11 15:53:42 | sdb: dev_t = 8:16 > >> Dec 11 15:53:42 | sdb: size = 2097152 > >> Dec 11 15:53:42 | sdb: vendor = DELL > >> Dec 11 15:53:42 | sdb: product = MD32xxi > >> Dec 11 15:53:42 | sdb: rev = 0820 > >> Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for > DELL:MD32xxi:0820 > >> Dec 11 15:53:42 | sdb: h:b:t:l = 15:0:0:0 > >> Dec 11 15:53:42 | sdb: tgt_node_name = > >> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 > >> Dec 11 15:53:42 | sdb: path state = running > >> Dec 11 15:53:42 | sdb: 1011 cyl, 34 heads, 61 sectors/track, start at 0 > >> Dec 11 15:53:42 | sdb: serial = 3CS00II > >> Dec 11 15:53:42 | sdb: get_state > >> Dec 11 15:53:42 | sdb: detect_checker = yes (setting: multipath > internal) > >> Dec 11 15:53:42 | loading //lib/multipath/libcheckrdac.so checker > >> Dec 11 15:53:42 | checker rdac: message table size = 9 > >> Dec 11 15:53:42 | sdb: path_checker = rdac (setting: storage device > >> autodetected) > >> Dec 11 15:53:42 | sdb: checker timeout = 30 s (setting: kernel sysfs) > >> Dec 11 15:53:42 | sdb: rdac state = up > >> Dec 11 15:53:42 | sdb: uid_attribute = ID_SERIAL (setting: > multipath.conf > >> defaults/devices section) > >> Dec 11 15:53:42 | sdb: 
uid = 361418770003f0d2800000a6c5df0a959 (udev) > >> Dec 11 15:53:42 | sdb: detect_prio = yes (setting: multipath internal) > >> Dec 11 15:53:42 | loading //lib/multipath/libpriordac.so prioritizer > >> Dec 11 15:53:42 | sdb: prio = rdac (setting: storage device > configuration) > >> Dec 11 15:53:42 | sdb: prio args = "" (setting: storage device > >> configuration) > >> Dec 11 15:53:42 | sdb: rdac prio = 11 > >> Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted > >> Dec 11 15:53:42 | sdc: mask = 0x1f > >> Dec 11 15:53:42 | sdc: dev_t = 8:32 > >> Dec 11 15:53:42 | sdc: size = 2097152 > >> Dec 11 15:53:42 | sdc: vendor = DELL > >> Dec 11 15:53:42 | sdc: product = MD32xxi > >> Dec 11 15:53:42 | sdc: rev = 0820 > >> Dec 11 15:53:42 | find_hwe: found 2 hwtable matches for > DELL:MD32xxi:0820 > >> Dec 11 15:53:42 | sdc: h:b:t:l = 16:0:0:0 > >> Dec 11 15:53:42 | sdc: tgt_node_name = > >> iqn.1984-05.com.dell:powervault.md3200i.61418770003f0d28000000005bbca139 > >> Dec 11 15:53:42 | sdc: path state = running > >> Dec 11 15:53:42 | sdc: 1011 cyl, 34 heads, 61 sectors/track, start at 0 > >> Dec 11 15:53:42 | sdc: serial = 3CS00II > >> Dec 11 15:53:42 | sdc: get_state > >> Dec 11 15:53:42 | sdc: detect_checker = yes (setting: multipath > internal) > >> Dec 11 15:53:42 | sdc: path_checker = rdac (setting: storage device > >> autodetected) > >> Dec 11 15:53:42 | sdc: checker timeout = 30 s (setting: kernel sysfs) > >> Dec 11 15:53:42 | sdc: rdac state = up > >> Dec 11 15:53:42 | sdc: uid_attribute = ID_SERIAL (setting: > multipath.conf > >> defaults/devices section) > >> Dec 11 15:53:42 | sdc: uid = 361418770003f0d2800000a6c5df0a959 (udev) > >> Dec 11 15:53:42 | sdc: detect_prio = yes (setting: multipath internal) > >> Dec 11 15:53:42 | sdc: prio = rdac (setting: storage device > configuration) > >> Dec 11 15:53:42 | sdc: prio args = "" (setting: storage device > >> configuration) > >> Dec 11 15:53:42 | sdc: rdac prio = 11 > >> Dec 11 15:53:42 | loop0: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop1: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop2: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop3: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop4: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop5: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop6: blacklisted, udev property missing > >> Dec 11 15:53:42 | loop7: blacklisted, udev property missing > >> Dec 11 15:53:42 | dm-0: blacklisted, udev property missing > >> Dec 11 15:53:42 | dm-1: blacklisted, udev property missing > >> ===== paths list ===== > >> uuid hcil dev dev_t pri dm_st chk_st > >> vend/pro > >> 361418770003f0d2800000a6c5df0a959 15:0:0:0 sdb 8:16 11 undef undef > >> DELL,MD3 > >> 361418770003f0d2800000a6c5df0a959 16:0:0:0 sdc 8:32 11 undef undef > >> DELL,MD3 > >> Dec 11 15:53:42 | libdevmapper version 1.02.155 (2018-12-18) > >> Dec 11 15:53:42 | DM multipath kernel driver v1.13.0 > >> Dec 11 15:53:42 | sdb: udev property ID_WWN whitelisted > >> Dec 11 15:53:42 | sdb: wwid 361418770003f0d2800000a6c5df0a959 > whitelisted > >> Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids > >> file, skipping sdb > >> Dec 11 15:53:42 | sdb: orphan path, only one path > >> Dec 11 15:53:42 | rdac prioritizer refcount 2 > >> Dec 11 15:53:42 | sdc: udev property ID_WWN whitelisted > >> Dec 11 15:53:42 | sdc: wwid 361418770003f0d2800000a6c5df0a959 > whitelisted > >> Dec 11 15:53:42 | wwid 361418770003f0d2800000a6c5df0a959 not in wwids > >> file, 
skipping sdc > >> Dec 11 15:53:42 | sdc: orphan path, only one path > >> Dec 11 15:53:42 | rdac prioritizer refcount 1 > >> Dec 11 15:53:42 | unloading rdac prioritizer > >> Dec 11 15:53:42 | unloading const prioritizer > >> Dec 11 15:53:42 | unloading rdac checker > >> Dec 11 15:53:42 | unloading tur checker > >> > >> I will appreciated any help. > >> > >> Thanks > >> > >> > >> > >> > >> --- > >> Gilberto Nunes Ferreira > >> > >> (47) 3025-5907 > >> (47) 99676-7530 - Whatsapp / Telegram > >> > >> Skype: gilberto.nunes36 > >> > >> > >> > >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From venefax at gmail.com Thu Dec 12 21:25:23 2019 From: venefax at gmail.com (Saint Michael) Date: Thu, 12 Dec 2019 15:25:23 -0500 Subject: [PVE-User] data partition not mounting proxmox 6.1 In-Reply-To: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> References: <58fc7f03-8866-8d9a-75fd-5de076d51aae@gmail.com> Message-ID: > > I installed the lates ISO from scratch, but the data partition did not > mount. So I formatted it using xfs and manually mounted it using mount /dev/pve/data /data and it mounts but if I add that to fstab, /dev/pve/data /data xfs defaults 0 1 it fails to mount and the box does not boot how do I fix this? I need to see my data partition via SSH Philip > From gianni.milo22 at gmail.com Fri Dec 13 06:57:08 2019 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Fri, 13 Dec 2019 05:57:08 +0000 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> Message-ID: All involved layers, from guest OS down to the actual backing storage, must participate for a successful trim/discard support. For more information on this have a look at the documentation notes... https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk_discard Regards, G. On Thu, 12 Dec 2019 at 16:01, Fr?d?ric MASSOT < frederic at juliana-multimedia.com> wrote: > Hi, > > I have a question about the discard/trim transmission between the VM / > LVM / mdadm layers up to the SSD. > > I have a server with four disks mounted in a RAID 10 array with mdadm. > On this RAID 10 array, there is a volume group with LVM, which contains > multiple logical volumes for the hypervisor and VMs. fstrim is > periodically launched on the hypervisor and VMs. > > I know that : > - A VM can pass discard/trim commands to the lower layer if it uses a > "Virtio SCSI" controller. > - LVM (since 2.02.85 with issue_discards enabled) can pass discard/trim > to the lower layer during remove or reduce operations of a logical volume. > - mdadm supports discard/trim since kernel 3.7. > > When a VM sends a discard/trim command, is it sent to the SSD, LVM does > not block the command? > Or is it useless, because mdadm handles discard/trim in his own way? > > > Regards. 
> -- > ============================================== > | FR?D?RIC MASSOT | > | http://www.juliana-multimedia.com | > | mailto:frederic at juliana-multimedia.com | > | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | > ===========================Debian=GNU/Linux=== > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From aderumier at odiso.com Fri Dec 13 13:15:37 2019 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Fri, 13 Dec 2019 13:15:37 +0100 (CET) Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> Message-ID: <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> When a VM sends a discard/trim command, is it sent to the SSD, LVM does not block the command? Hi, yes it's working with lvm (lvm thin only) >>Or is it useless, because mdadm handles discard/trim in his own way? The trim command need to be send by the guest os. on linux guest : /etc/fstab discard option on mounpoint, or run fstrim command in cron, on windows: it's a periodic schedule task (in "optimize disk" tool) ----- Mail original ----- De: "Fr?d?ric Massot" ?: "proxmoxve" Envoy?: Jeudi 12 D?cembre 2019 17:00:47 Objet: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? Hi, I have a question about the discard/trim transmission between the VM / LVM / mdadm layers up to the SSD. I have a server with four disks mounted in a RAID 10 array with mdadm. On this RAID 10 array, there is a volume group with LVM, which contains multiple logical volumes for the hypervisor and VMs. fstrim is periodically launched on the hypervisor and VMs. I know that : - A VM can pass discard/trim commands to the lower layer if it uses a "Virtio SCSI" controller. - LVM (since 2.02.85 with issue_discards enabled) can pass discard/trim to the lower layer during remove or reduce operations of a logical volume. - mdadm supports discard/trim since kernel 3.7. When a VM sends a discard/trim command, is it sent to the SSD, LVM does not block the command? Or is it useless, because mdadm handles discard/trim in his own way? Regards. -- ============================================== | FR?D?RIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic at juliana-multimedia.com | | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | ===========================Debian=GNU/Linux=== _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From frederic at juliana-multimedia.com Fri Dec 13 16:52:41 2019 From: frederic at juliana-multimedia.com (=?UTF-8?B?RnLDqWTDqXJpYyBNQVNTT1Q=?=) Date: Fri, 13 Dec 2019 16:52:41 +0100 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> Message-ID: Le 13/12/2019 ? 13:15, Alexandre DERUMIER a ?crit?: > When a VM sends a discard/trim command, is it sent to the SSD, LVM does > not block the command? > > Hi, yes it's working with lvm (lvm thin only) Ah, there is the subtlety, lvm-thin transmits discard/trim commands, but not lvm. It is therefore necessary to use the storage type lvm-thin and not lvm. 
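For illustration, the two types are separate entries in /etc/pve/storage.cfg; a sketch with example names, assuming a volume group called pve and a thin pool called data:

lvm: vg-images
        vgname pve
        content images,rootdir

lvmthin: local-lvm
        vgname pve
        thinpool data
        content images,rootdir

With the lvmthin type every guest disk is a thin LV inside the pool, which is what lets blocks trimmed by the guest be released back to the pool.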
I installed the server with the Debian installer and then I installed
Proxmox. The Debian installer uses the type lvm and not lvm-thin.

Regards.
--
==============================================
|   FRÉDÉRIC MASSOT                           |
|     http://www.juliana-multimedia.com       |
|   mailto:frederic at juliana-multimedia.com   |
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69   |
===========================Debian=GNU/Linux===

From olivier.benghozi at wifirst.fr Fri Dec 13 19:22:05 2019
From: olivier.benghozi at wifirst.fr (Olivier Benghozi)
Date: Fri, 13 Dec 2019 19:22:05 +0100
Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ?
In-Reply-To:
References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com>
 <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com>
Message-ID: <90C241D6-C181-4074-8055-B8B2394A2CA8@wifirst.fr>

In fact if you use the debian installer you have to take care of the
creation of the data LVs (and in fact, of the general partitioning scheme)
yourself, and not accept what the debian installer does by default.

The lvm-thin data volume must be created manually after install (and
therefore most of the space must be left free at install time, and you'd
better create it with 16GB of metadata space to avoid horror stories later).

For example, this is what we can find here (installed using the debian
buster installer):

# lsblk (edited)
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0   1.8T  0 disk
├─sda1                 8:1    0   512M  0 part /boot/efi
├─sda2                 8:2    0   488M  0 part /boot
└─sda3                 8:3    0   1.8T  0 part
  ├─pve-root         253:0    0 186.3G  0 lvm  /
  ├─pve-swap         253:1    0   7.5G  0 lvm  [SWAP]
  ├─pve-data_tmeta   253:2    0  15.8G  0 lvm
  │ └─pve-data-tpool 253:4    0   1.5T  0 lvm
  └─pve-data_tdata   253:3    0   1.5T  0 lvm
    └─pve-data-tpool 253:4    0   1.5T  0 lvm

# lvs -a
LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data            pve twi-aotz--  <1.53t             0.35   0.16
[data_tdata]    pve Twi-ao----  <1.53t
[data_tmeta]    pve ewi-ao----  16.00g
[lvol0_pmspare] pve ewi-------  16.00g
root            pve -wi-ao---- 186.26g
swap            pve -wi-ao----  <7.45g

> Le 13 déc. 2019 à 16:52, Frédéric MASSOT a écrit :
>
> Ah, there is the subtlety, lvm-thin transmits discard/trim commands, but
> not lvm. It is therefore necessary to use the storage type lvm-thin and
> not lvm.
>
> I installed the server with the Debian installer and then I installed
> Proxmox. The Debian installer uses the type lvm and not lvm-thin.
>

From frederic at juliana-multimedia.com Sun Dec 15 12:51:01 2019
From: frederic at juliana-multimedia.com (=?UTF-8?B?RnLDqWTDqXJpYyBNQVNTT1Q=?=)
Date: Sun, 15 Dec 2019 12:51:01 +0100
Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ?
In-Reply-To: <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com>
References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com>
 <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com>
Message-ID: <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com>

Le 13/12/2019 à 13:15, Alexandre DERUMIER a écrit :
> When a VM sends a discard/trim command, is it sent to the SSD, LVM does
> not block the command?
>
> Hi, yes it's working with lvm (lvm thin only)

Hi,

I opened a bug report against the debian-installer package to see if it
could add lvm-thin support.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946682

Ben Hutchings answered this about the non-transmission of discard by LVM:
"This seems to be incorrect. LVM normally uses dm-table, which supports
discard if the underlying device(s) do".

I do not know what to think. Mdadm, the lower layer, supports discard.

Regards.
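For completeness, Olivier's manual setup described earlier in the thread (leave most of the volume group free at install time, then build the thin pool with a generous metadata volume) boils down to something like the commands below. The VG name pve matches his output; the sizes and the storage name local-lvm are only examples, not a recommendation.

lvcreate -n data -L 1.5T pve              # data LV that will become the thin pool
lvcreate -n data_meta -L 16G pve          # its metadata LV
lvconvert --type thin-pool --poolmetadata pve/data_meta pve/data
pvesm add lvmthin local-lvm --vgname pve --thinpool data   # register it as PVE storage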
-- ============================================== | FR?D?RIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic at juliana-multimedia.com | | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | ===========================Debian=GNU/Linux=== From mir at miras.org Sun Dec 15 13:13:21 2019 From: mir at miras.org (Michael Rasmussen) Date: Sun, 15 Dec 2019 13:13:21 +0100 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com> Message-ID: <20191215131321.1dd3fe55@sleipner.datanom.net> On Sun, 15 Dec 2019 12:51:01 +0100 Fr?d?ric MASSOT wrote: > Le 13/12/2019 ? 13:15, Alexandre DERUMIER a ?crit?: > > When a VM sends a discard/trim command, is it sent to the SSD, LVM > > does not block the command? > > > > Hi, yes it's working with lvm (lvm thin only) > > Hi, > > I opened a bug report to the debian-installer package to see if it > could add lvm-thin support. > LVM supports discard so there is no need to change to lvm-thin for that reason. From lvm.conf: " # Configuration option devices/issue_discards. # Issue discards to PVs that are no longer used by an LV. # Discards are sent to an LV's underlying physical volumes when the LV # is no longer using the physical volumes' space, e.g. lvremove, # lvreduce. Discards inform the storage that a region is no longer # used. Storage that supports discards advertise the protocol-specific # way discards should be issued by the kernel (TRIM, UNMAP, or # WRITE SAME with UNMAP bit set). Not all storage will support or # benefit from discards, but SSDs and thinly provisioned LUNs # generally do. If enabled, discards will only be issued if both the # storage and kernel provide support. issue_discards = 0 " -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E mir datanom net https://pgp.key-server.io/pks/lookup?search=0xE501F51C mir miras org https://pgp.key-server.io/pks/lookup?search=0xE3E80917 -------------------------------------------------------------- /usr/games/fortune -es says: Q: How many supply-siders does it take to change a light bulb? A: None. The darkness will cause the light bulb to change by itself. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From frederic at juliana-multimedia.com Sun Dec 15 17:20:14 2019 From: frederic at juliana-multimedia.com (=?UTF-8?B?RnLDqWTDqXJpYyBNQVNTT1Q=?=) Date: Sun, 15 Dec 2019 17:20:14 +0100 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <20191215131321.1dd3fe55@sleipner.datanom.net> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com> <20191215131321.1dd3fe55@sleipner.datanom.net> Message-ID: <956a4ffd-9703-0123-2f66-5071f0ce4a97@juliana-multimedia.com> Le 15/12/2019 ? 13:13, Michael Rasmussen a ?crit?: > On Sun, 15 Dec 2019 12:51:01 +0100 > Fr?d?ric MASSOT wrote: > >> Le 13/12/2019 ? 13:15, Alexandre DERUMIER a ?crit?: >>> When a VM sends a discard/trim command, is it sent to the SSD, LVM >>> does not block the command? 
>>> >>> Hi, yes it's working with lvm (lvm thin only) >> >> Hi, >> >> I opened a bug report to the debian-installer package to see if it >> could add lvm-thin support. >> > LVM supports discard so there is no need to change to lvm-thin for that > reason. > > From lvm.conf: > " # Configuration option devices/issue_discards. > # Issue discards to PVs that are no longer used by an LV. > # Discards are sent to an LV's underlying physical volumes when > the LV # is no longer using the physical volumes' space, e.g. lvremove, > # lvreduce. Discards inform the storage that a region is no > longer # used. Storage that supports discards advertise the > protocol-specific # way discards should be issued by the kernel (TRIM, > UNMAP, or # WRITE SAME with UNMAP bit set). Not all storage will > support or # benefit from discards, but SSDs and thinly provisioned LUNs > # generally do. If enabled, discards will only be issued if > both the # storage and kernel provide support. > issue_discards = 0 > " Yes, I know this description of the "issue_discards" option. I find this description incomplete. It is clearly stated that in the event of removal or reduction of the logical volume, the discard commands are sent. But nothing is said for common operations like deleting a file within the logical volume, or a periodic fstrim. It is unknown if discard commands are transmitted. It was this ambiguity that made me open this discussion. Regards. -- ============================================== | FR?D?RIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic at juliana-multimedia.com | | +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | ===========================Debian=GNU/Linux=== From miguel_3_gonzalez at yahoo.es Sun Dec 15 21:16:27 2019 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Sun, 15 Dec 2019 21:16:27 +0100 Subject: Proxmox 5 to root on ZFS References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: <09ea5410-ea5f-d61c-9d5a-5977df1fc400@yahoo.es> Hi, ? I have a new dedicated server on OVH with two sets of SSD and SATA drives (4 in total). Template from OVH only installs Proxmox 5 without ZFS on a LVM soft RAID1. I have used this tool to create a ZFS pool: ? https://github.com/extremeshok/xshok-proxmox ? But I?m wondering if It would be worth the effort to try move the OS too to ZFS. What are the benefits? I am using this machine for disaster recovery VM (pve-zsync) and run backups. ? If It is worth it, any advice on how to do it? I can?t find a proper procedure, I just have a rescue system from OVH (not possible to use specific ISO or KVM) ? Thanks! ? Miguel From gianni.milo22 at gmail.com Mon Dec 16 00:30:31 2019 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Sun, 15 Dec 2019 23:30:31 +0000 Subject: [PVE-User] Proxmox 5 to root on ZFS In-Reply-To: References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: Don't have experience with OVH, but depending on the situation, it might be possible to "transfer" the existing OS installation to a ZFS backed setup. To do so, it will be required to have an additional hdd, partition it, create a ZFS pool on it with proper datasets (rpool/ROOT/pve-1 etc..), set the appropriate zfs options for each dataset (you can use an existing root on zfs installation as a reference) and rsync the existing PVE (lvm) installation onto it. Then chroot into the zfs installation and make the required changes in grub, initramfs,fstab etc etc, finally boot from the system from the zfs disk. 
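Only as a sketch of the steps described above, and assuming a spare disk /dev/sdX is available: the pool options, rsync excludes and bootloader handling below are simplified, and the /boot and GRUB side is exactly where the full root-on-ZFS guides linked later in the thread are needed.

zpool create -f -o ashift=12 -O compression=lz4 -O acltype=posixacl -O xattr=sa \
    -O mountpoint=none -R /mnt rpool /dev/sdX
zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/   rpool/ROOT/pve-1

# copy the running LVM-based system onto the pool
rsync -aAXH --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run --exclude=/mnt / /mnt/

# chroot and adjust fstab, GRUB and the initramfs
for d in dev proc sys run; do mount --rbind /$d /mnt/$d; done
chroot /mnt /bin/bash
#   inside the chroot: edit /etc/fstab, then
#   grub-install /dev/sdX && update-grub && update-initramfs -u -k all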
It will require some effort, but should be doable. What are the benefits? Unless you want to benefit from the zfs snapshots for rolling back your system after an unwanted system update, I don't see any. Is it worth it? Don't think so, especially since this is a backup system. Since you've managed to create a zfs pool for storing your VMs (pve-zsync target), that should be enough in my opinion. B.R. G. On Sun, 15 Dec 2019 at 20:16, Miguel Gonz?lez via pve-user < pve-user at pve.proxmox.com> wrote: > > > > ---------- Forwarded message ---------- > From: "Miguel Gonz?lez" > To: "pve-user at pve.proxmox.com" > Cc: > Bcc: > Date: Sun, 15 Dec 2019 21:16:27 +0100 > Subject: Proxmox 5 to root on ZFS > Hi, > > I have a new dedicated server on OVH with two sets of SSD and SATA > drives (4 in total). Template from OVH only installs Proxmox 5 without > ZFS on a LVM soft RAID1. I have used this tool to create a ZFS pool: > > https://github.com/extremeshok/xshok-proxmox > > But I?m wondering if It would be worth the effort to try move the OS > too to ZFS. What are the benefits? I am using this machine for disaster > recovery VM (pve-zsync) and run backups. > > If It is worth it, any advice on how to do it? I can?t find a proper > procedure, I just have a rescue system from OVH (not possible to use > specific ISO or KVM) > > Thanks! > > Miguel > > > > > > ---------- Forwarded message ---------- > From: "Miguel Gonz?lez via pve-user" > To: "pve-user at pve.proxmox.com" > Cc: "Miguel Gonz?lez" > Bcc: > Date: Sun, 15 Dec 2019 21:16:27 +0100 > Subject: [PVE-User] Proxmox 5 to root on ZFS > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From jonny at lemonjuice.tk Mon Dec 16 01:05:19 2019 From: jonny at lemonjuice.tk (Jonny) Date: Mon, 16 Dec 2019 00:05:19 +0000 Subject: [PVE-User] Proxmox 5 to root on ZFS In-Reply-To: References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: <5964e501-f9d0-84b3-8e40-35faf5f930a4@lemonjuice.tk> Hey, I've not long setup 2 OVH (SoYouStart) Servers myself, with Fully encrypted ZFS with Proxmox. I think the best approach personally is to reinstall either using proxmox installer ISO via KVM. Then you can install with ZFS as you want. Then you can setup RAIDZ1 etc, allowing for one disk failure. If you want ZFS and full disk encryption, then its a much longer process but doable. See links below. Is there a reason you didn't use the KVM and install from ISO to start with? I followed these two guides: (tweaked bits here and there though) https://github.com/zfsonlinux/zfs/wiki/Debian-Buster-Root-on-ZFS https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster Jonny On 15/12/2019 23:30, Gianni Milo wrote: > Don't have experience with OVH, but depending on the situation, it might be > possible to "transfer" the existing OS installation to a ZFS backed setup. > > To do so, it will be required to have an additional hdd, partition it, > create a ZFS pool on it with proper datasets (rpool/ROOT/pve-1 etc..), set > the appropriate zfs options for each dataset (you can use an existing root > on zfs installation as a reference) and rsync the existing PVE (lvm) > installation onto it. Then chroot into the zfs installation and make the > required changes in grub, initramfs,fstab etc etc, finally boot from the > system from the zfs disk. It will require some effort, but should be > doable. > > What are the benefits? 
Unless you want to benefit from the zfs snapshots > for rolling back your system after an unwanted system update, I don't see > any. > > Is it worth it? Don't think so, especially since this is a backup system. > Since you've managed to create a zfs pool for storing your VMs (pve-zsync > target), that should be enough in my opinion. > > B.R. > G. > > > > On Sun, 15 Dec 2019 at 20:16, Miguel Gonz?lez via pve-user < > pve-user at pve.proxmox.com> wrote: > >> >> >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez" >> To: "pve-user at pve.proxmox.com" >> Cc: >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: Proxmox 5 to root on ZFS >> Hi, >> >> I have a new dedicated server on OVH with two sets of SSD and SATA >> drives (4 in total). Template from OVH only installs Proxmox 5 without >> ZFS on a LVM soft RAID1. I have used this tool to create a ZFS pool: >> >> https://github.com/extremeshok/xshok-proxmox >> >> But I?m wondering if It would be worth the effort to try move the OS >> too to ZFS. What are the benefits? I am using this machine for disaster >> recovery VM (pve-zsync) and run backups. >> >> If It is worth it, any advice on how to do it? I can?t find a proper >> procedure, I just have a rescue system from OVH (not possible to use >> specific ISO or KVM) >> >> Thanks! >> >> Miguel >> >> >> >> >> >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez via pve-user" >> To: "pve-user at pve.proxmox.com" >> Cc: "Miguel Gonz?lez" >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: [PVE-User] Proxmox 5 to root on ZFS >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Jonny From miguel_3_gonzalez at yahoo.es Mon Dec 16 14:11:08 2019 From: miguel_3_gonzalez at yahoo.es (Miguel Gonzalez) Date: Mon, 16 Dec 2019 13:11:08 +0000 (UTC) Subject: [PVE-User] Proxmox 5 to root on ZFS In-Reply-To: References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: <543214433.24965420.1576501868241@mail.yahoo.com> Thanks both for quickly answering my message. If you want to use KVM in soyoustart you need to pay. They offer pretested Proxmox templates so I used their default Proxmox template with LVM since they don?t have ZFS yet (maybe that will change in the near future). I asked about it would be worth in case of perfomance, but If you say I am fine with OS running on LVM RAID1 and keep other partitions with ZFS, It?s fine with me. My idea is to create a second pool for the 2 SATA drives so I can run some VMs separating the OS on the SSD ZFS pool and the data of the VM on the SATA ZFS pool. Thanks! Miguel En lunes, 16 de diciembre de 2019 1:05:37 CET, Jonny via pve-user escribi?: Hey, I've not long setup 2 OVH (SoYouStart) Servers myself, with Fully encrypted ZFS with Proxmox. I think the best approach personally is to reinstall either using proxmox installer ISO via KVM. Then you can install with ZFS as you want. Then you can setup RAIDZ1 etc, allowing for one disk failure. If you want ZFS and full disk encryption, then its a much longer process but doable. See links below. Is there a reason you didn't use the KVM and install from ISO to start with? 
I followed these two guides: (tweaked bits here and there though) https://github.com/zfsonlinux/zfs/wiki/Debian-Buster-Root-on-ZFS https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster Jonny On 15/12/2019 23:30, Gianni Milo wrote: > Don't have experience with OVH, but depending on the situation, it might be > possible to "transfer" the existing OS installation to a ZFS backed setup. > > To do so, it will be required to have an additional hdd, partition it, > create a ZFS pool on it with proper datasets (rpool/ROOT/pve-1 etc..), set > the appropriate zfs options for each dataset (you can use an existing root > on zfs installation as a reference) and rsync the existing PVE (lvm) > installation onto it. Then chroot into the zfs installation and make the > required changes in grub, initramfs,fstab etc etc, finally boot from the > system from the zfs disk. It will require some effort, but should be > doable. > > What are the benefits? Unless you want to benefit from the zfs snapshots > for rolling back your system after an unwanted system update, I don't see > any. > > Is it worth it? Don't think so, especially since this is a backup system. > Since you've managed to create a zfs pool for storing your VMs (pve-zsync > target), that should be enough in my opinion. > > B.R. > G. > > > > On Sun, 15 Dec 2019 at 20:16, Miguel Gonz?lez via pve-user < > pve-user at pve.proxmox.com> wrote: > >> >> >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez" >> To: "pve-user at pve.proxmox.com" >> Cc: >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: Proxmox 5 to root on ZFS >> Hi, >> >>? ? I have a new dedicated server on OVH with two sets of SSD and SATA >> drives (4 in total). Template from OVH only installs Proxmox 5 without >> ZFS on a LVM soft RAID1. I have used this tool to create a ZFS pool: >> >>? ? https://github.com/extremeshok/xshok-proxmox >> >>? ? But I?m wondering if It would be worth the effort to try move the OS >> too to ZFS. What are the benefits? I am using this machine for disaster >> recovery VM (pve-zsync) and run backups. >> >>? ? If It is worth it, any advice on how to do it? I can?t find a proper >> procedure, I just have a rescue system from OVH (not possible to use >> specific ISO or KVM) >> >>? ? Thanks! >> >>? ? Miguel >> >> >> >> >> >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez via pve-user" >> To: "pve-user at pve.proxmox.com" >> Cc: "Miguel Gonz?lez" >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: [PVE-User] Proxmox 5 to root on ZFS >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Jonny _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From jonny at lemonjuice.tk Mon Dec 16 16:30:35 2019 From: jonny at lemonjuice.tk (Jonny Proud) Date: Mon, 16 Dec 2019 15:30:35 +0000 Subject: [PVE-User] Proxmox 5 to root on ZFS In-Reply-To: References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: <9b110e1d-ac34-302a-1288-9946719d32a6@lemonjuice.tk> I'd say your better off with one big pool and do a Raidz1, so you have 3 drives of storage and 1 parity drive. 
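For reference, the single raidz1 pool suggested here would be created along these lines; the disk paths are placeholders, and this assumes four drives of similar size and type (with the mixed SSD + SATA layout discussed earlier in the thread, two separate mirrored pools may fit better).

zpool create -o ashift=12 -O compression=lz4 tank raidz1 \
    /dev/disk/by-id/disk-a /dev/disk/by-id/disk-b \
    /dev/disk/by-id/disk-c /dev/disk/by-id/disk-d
pvesm add zfspool tank-vms --pool tank    # make it usable as VM storage in PVE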
There's a trick you can do with the Debian rescue to be able to use any ISO
installer you want without needing to order the silly priced KVM. If you run
the following commands, you can then use a VNC viewer to connect in and do
the graphical installer.

Note: you might need to change the drives bit if you've got fewer or more
drives. I got the idea from here:
https://rinaldojonathan.com/en/cara-buat-virtual-kvm-di-server-ovh-kimsufi-soyoustart/

wget https://youriso.com/youriso.iso -O /tmp/install.iso

qemu-system-x86_64 -net nic -net user,hostfwd=tcp::2222-:22 -m 20240M -localtime -enable-kvm \
  -drive file=/dev/sda,if=virtio \
  -drive file=/dev/sdb,if=virtio \
  -drive file=/dev/sdc,if=virtio \
  -cdrom /tmp/install.iso \
  -vnc 0.0.0.0:0 -boot d -smp threads=7 -cpu host

Thanks both for quickly answering my message. If you want to use KVM in
SoYouStart you need to pay. They offer pretested Proxmox templates, so I used
their default Proxmox template with LVM since they don't have ZFS yet (maybe
that will change in the near future).

I asked whether it would be worth it in terms of performance, but if you say
I am fine with the OS running on LVM RAID1 and keeping the other partitions
on ZFS, that's fine with me.

My idea is to create a second pool for the 2 SATA drives so I can run some
VMs separating the OS on the SSD ZFS pool and the data of the VM on the SATA
ZFS pool.

Thanks!

Miguel

En lunes, 16 de diciembre de 2019 1:05:37 CET, Jonny via pve-user escribió:

Hey,

I've not long setup 2 OVH (SoYouStart) servers myself, with fully encrypted
ZFS with Proxmox.

I think the best approach personally is to reinstall, using the Proxmox
installer ISO via KVM. Then you can install with ZFS as you want. Then you
can setup RAIDZ1 etc, allowing for one disk failure.

If you want ZFS and full disk encryption, then it's a much longer process but
doable. See links below.

Is there a reason you didn't use the KVM and install from ISO to start with?

I followed these two guides: (tweaked bits here and there though)

https://github.com/zfsonlinux/zfs/wiki/Debian-Buster-Root-on-ZFS
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster

Jonny

On 15/12/2019 23:30, Gianni Milo wrote:
> Don't have experience with OVH, but depending on the situation, it might be
> possible to "transfer" the existing OS installation to a ZFS backed setup.
>
> To do so, it will be required to have an additional hdd, partition it,
> create a ZFS pool on it with proper datasets (rpool/ROOT/pve-1 etc..), set
> the appropriate zfs options for each dataset (you can use an existing root
> on zfs installation as a reference) and rsync the existing PVE (lvm)
> installation onto it. Then chroot into the zfs installation and make the
> required changes in grub, initramfs, fstab etc etc, finally boot the
> system from the zfs disk. It will require some effort, but should be
> doable.
>
> What are the benefits? Unless you want to benefit from the zfs snapshots
> for rolling back your system after an unwanted system update, I don't see
> any.
>
> Is it worth it? Don't think so, especially since this is a backup system.
> Since you've managed to create a zfs pool for storing your VMs (pve-zsync
> target), that should be enough in my opinion.
>
> B.R.
> G.
> > > > On Sun, 15 Dec 2019 at 20:16, Miguel Gonz?lez via pve-user < > pve-user at pve.proxmox.com> wrote: > >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez" >> To: "pve-user at pve.proxmox.com" >> Cc: >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: Proxmox 5 to root on ZFS >> Hi, >> >> ? ? I have a new dedicated server on OVH with two sets of SSD and SATA >> drives (4 in total). Template from OVH only installs Proxmox 5 without >> ZFS on a LVM soft RAID1. I have used this tool to create a ZFS pool: >> >> ? ? https://github.com/extremeshok/xshok-proxmox >> >> ? ? But I?m wondering if It would be worth the effort to try move the OS >> too to ZFS. What are the benefits? I am using this machine for disaster >> recovery VM (pve-zsync) and run backups. >> >> ? ? If It is worth it, any advice on how to do it? I can?t find a proper >> procedure, I just have a rescue system from OVH (not possible to use >> specific ISO or KVM) >> >> ? ? Thanks! >> >> ? ? Miguel >> >> >> >> >> >> ---------- Forwarded message ---------- >> From: "Miguel Gonz?lez via pve-user" >> To: "pve-user at pve.proxmox.com" >> Cc: "Miguel Gonz?lez" >> Bcc: >> Date: Sun, 15 Dec 2019 21:16:27 +0100 >> Subject: [PVE-User] Proxmox 5 to root on ZFS >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Jonny -- Jonny From mark at tuxis.nl Mon Dec 16 17:19:11 2019 From: mark at tuxis.nl (Mark Schouten) Date: Mon, 16 Dec 2019 17:19:11 +0100 Subject: [PVE-User] Proxmox 5 to root on ZFS In-Reply-To: References: <09ea5410-ea5f-d61c-9d5a-5977df1fc400.ref@yahoo.es> Message-ID: <62f289282e1ca7bd38d97390e0253769@tuxis.nl> Here's what I did.. https://pve.proxmox.com/pipermail/pve-user/2018-November/170210.html -- Mark Schouten Tuxis, Ede, https://www.tuxis.nl T: +31 318 200208? ? ----- Originele bericht ----- Van: Miguel Gonz?lez via pve-user (pve-user at pve.proxmox.com) Datum: 15-12-2019 21:16 Naar: pve-user at pve.proxmox.com Cc: Miguel Gonz?lez (miguel_3_gonzalez at yahoo.es) Onderwerp: [PVE-User] Proxmox 5 to root on ZFS _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From piccardi at truelite.it Tue Dec 17 17:25:47 2019 From: piccardi at truelite.it (Simone Piccardi) Date: Tue, 17 Dec 2019 17:25:47 +0100 Subject: [PVE-User] Discard transmission between VM / LVM / mdadm layers ? In-Reply-To: <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com> References: <6aa2c337-affc-402f-82c9-26272a40cd9a@juliana-multimedia.com> <1167030265.515974.1576239337145.JavaMail.zimbra@odiso.com> <45f9bfb8-ce10-4a67-04c4-80cc5f14f2bb@juliana-multimedia.com> Message-ID: <2d3d31f8-cc84-35fb-b3e4-92b0bf6f5048@truelite.it> Il 15/12/19 12:51, Fr?d?ric MASSOT ha scritto: > > Ben Hutchings answered this about the non transmission of discard by > LVM: "This seems to be incorrect. LVM normally uses dm-table, which > supports discard if the underlying device(s) do". > > I do not know what to think. Mdadm, the lower layer, supports discard. > Yes, LVM support discards, and they are just passed to the underlying disk anyway (lvm-thin is still LVM). 
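Rather than arguing it out, this particular point can be checked on any host: the kernel exposes, per block device, whether discards will be passed down. The command below is stock util-linux; the interesting rows are whatever lsblk lists on the host in question.

lsblk --discard
# non-zero DISC-GRAN / DISC-MAX on the sdX, md and dm-* rows means that layer
# accepts discard requests; a zero value means requests stop there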
But by using standard LVM on proxmox you allocate a fixed slice of disk, not just the disk space you are effectively using. The discard is just passed in the underlying disk. But this way you don't have snapshot, nor you can overcommit if your VMs are using only a small fraction of their disks. When you use lvm-thin the disk space is taken from a data pool volume, and the logical volumes used as virtual machine disks only use the disk space actually occupied. If you free some of this disk space a discard (given calling fstrim inside the VM) also free the unused disk space on the virtual disk from the data pool making it available again to other logical volumes. Simone -- Simone Piccardi Truelite Srl piccardi at truelite.it (email/jabber) Via Monferrato, 6 Tel. +39-347-1032433 50142 Firenze http://www.truelite.it Tel. +39-055-7879597 From f.thommen at dkfz-heidelberg.de Tue Dec 17 20:29:02 2019 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Tue, 17 Dec 2019 20:29:02 +0100 Subject: [PVE-User] Are two bridges to the same network possible? Message-ID: Hello, (how) can I connect two virtual bridges to the same network (using the same gateway)? Our currently-being-built-up PVE server has four Linux network bonds connecting to two three networks. One is being used for the PVE server itself (administration, web ui ecc), the other three for virtual machines which will reside in three different networks. One of these three networks is the same as the one where the PVE server is residing, but I'd still like to use separate NICs for the VMs. The server itself is attached to the default vmbr0 (using bond0). But as soon as I want to configure a second bridge vmbr1 (using bond1) to the same network, PROXMOX complains that the default gateway already exists. Is it technically possible/supported to have multiple bridges to the same network (with the same gateways)? Cheers frank From chris.hofstaedtler at deduktiva.com Tue Dec 17 21:42:29 2019 From: chris.hofstaedtler at deduktiva.com (Chris Hofstaedtler | Deduktiva) Date: Tue, 17 Dec 2019 21:42:29 +0100 Subject: [PVE-User] Are two bridges to the same network possible? In-Reply-To: References: Message-ID: <20191217204229.voaluiexilzytd4h@zeha.at> * Frank Thommen [191217 20:29]: > (how) can I connect two virtual bridges to the same network (using the same > gateway)? > > Our currently-being-built-up PVE server has four Linux network bonds > connecting to two three networks. One is being used for the PVE server > itself (administration, web ui ecc), the other three for virtual machines > which will reside in three different networks. One of these three networks > is the same as the one where the PVE server is residing, but I'd still like > to use separate NICs for the VMs. > > The server itself is attached to the default vmbr0 (using bond0). But as > soon as I want to configure a second bridge vmbr1 (using bond1) to the same > network, PROXMOX complains that the default gateway already exists. > > Is it technically possible/supported to have multiple bridges to the same > network (with the same gateways)? Do you actually need IP traffic on the host on the second interface? If not, then don't configure IP addresses, gateways, etc... 
Chris -- Chris Hofstaedtler / Deduktiva GmbH (FN 418592 b, HG Wien) www.deduktiva.com / +43 1 353 1707 From f.thommen at dkfz-heidelberg.de Tue Dec 17 22:10:24 2019 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Tue, 17 Dec 2019 22:10:24 +0100 Subject: [PVE-User] Are two bridges to the same network possible? In-Reply-To: <20191217204229.voaluiexilzytd4h@zeha.at> References: <20191217204229.voaluiexilzytd4h@zeha.at> Message-ID: On 17.12.19 21:42, Chris Hofstaedtler | Deduktiva wrote: > * Frank Thommen [191217 20:29]: >> (how) can I connect two virtual bridges to the same network (using the same >> gateway)? >> >> Our currently-being-built-up PVE server has four Linux network bonds >> connecting to two three networks. One is being used for the PVE server >> itself (administration, web ui ecc), the other three for virtual machines >> which will reside in three different networks. One of these three networks >> is the same as the one where the PVE server is residing, but I'd still like >> to use separate NICs for the VMs. >> >> The server itself is attached to the default vmbr0 (using bond0). But as >> soon as I want to configure a second bridge vmbr1 (using bond1) to the same >> network, PROXMOX complains that the default gateway already exists. >> >> Is it technically possible/supported to have multiple bridges to the same >> network (with the same gateways)? > > Do you actually need IP traffic on the host on the second interface? > If not, then don't configure IP addresses, gateways, etc... > > Chris > I probably don't understand the question correctly: One has the IP traffic to the server/hypervisor, the other has IP traffic for the attached VMs and containers. Is there no need for a gateway if the bridge only provisions VMs and containers with network connectivity? Should I think of the bridge as a dumb connector on the link level (I should probably and urgently reread my networking materials :-) Cheers, frank From chris.hofstaedtler at deduktiva.com Tue Dec 17 22:35:56 2019 From: chris.hofstaedtler at deduktiva.com (Chris Hofstaedtler | Deduktiva) Date: Tue, 17 Dec 2019 22:35:56 +0100 Subject: [PVE-User] Are two bridges to the same network possible? In-Reply-To: References: <20191217204229.voaluiexilzytd4h@zeha.at> Message-ID: <20191217213556.xv4zdrb5p4bhqmn2@zeha.at> * Frank Thommen [191217 22:10]: > I probably don't understand the question correctly: One has the IP traffic > to the server/hypervisor, the other has IP traffic for the attached VMs and > containers. Is there no need for a gateway if the bridge only provisions > VMs and containers with network connectivity? Should I think of the bridge > as a dumb connector on the link level (I should probably and urgently reread > my networking materials :-) You can think of it as a dumb switch. As a special feature, Linux allows you to also setup host network connectivity on a bridge - but it doesn't have to do that. I would guess that on the bridge you want to use for your VMs, you don't need an IP adress (and also no gateway then). Generally speaking, having more than one default gateway per host is an advanced configuration and you really need to know what you're doing then. Having multiple interfaces with IP addresses is a common thing, but you'llneed to understand how your routing setup works. 
Chris -- Chris Hofstaedtler / Deduktiva GmbH (FN 418592 b, HG Wien) www.deduktiva.com / +43 1 353 1707 From f.thommen at dkfz-heidelberg.de Wed Dec 18 17:05:34 2019 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Wed, 18 Dec 2019 17:05:34 +0100 Subject: [PVE-User] Are two bridges to the same network possible? In-Reply-To: <20191217213556.xv4zdrb5p4bhqmn2@zeha.at> References: <20191217204229.voaluiexilzytd4h@zeha.at> <20191217213556.xv4zdrb5p4bhqmn2@zeha.at> Message-ID: <05931446-2d24-c5ac-4aab-3d37a55c3bb7@dkfz-heidelberg.de> On 12/17/19 10:35 PM, Chris Hofstaedtler | Deduktiva wrote: > * Frank Thommen [191217 22:10]: >> I probably don't understand the question correctly: One has the IP traffic >> to the server/hypervisor, the other has IP traffic for the attached VMs and >> containers. Is there no need for a gateway if the bridge only provisions >> VMs and containers with network connectivity? Should I think of the bridge >> as a dumb connector on the link level (I should probably and urgently reread >> my networking materials :-) > > You can think of it as a dumb switch. As a special feature, Linux > allows you to also setup host network connectivity on a bridge - but > it doesn't have to do that. > > I would guess that on the bridge you want to use for your VMs, you > don't need an IP adress (and also no gateway then). > > Generally speaking, having more than one default gateway per host is > an advanced configuration and you really need to know what you're > doing then. Having multiple interfaces with IP addresses is a common > thing, but you'llneed to understand how your routing setup works. Thanks, that helps. I think I will additionally remove vmbr0 (using bond0), as this connection will not be used for any virtual machine or container, but only to access the hypervisor itself. frank
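For readers landing on this thread later, the setup Frank describes (host address on the unbridged bond, VM-facing bridge with no IP configuration at all) corresponds to an /etc/network/interfaces along these lines. Addresses, NIC names and bond settings are invented for the example.

auto bond0
iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
# hypervisor management traffic only, no bridge needed here

auto bond1
iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-mode 802.3ad
        bond-miimon 100

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
# VM traffic only: no address and no gateway on the host side of this bridge

Only one default gateway is defined, on bond0, which sidesteps the duplicate-gateway error from the original question.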