From f.gruenbichler at proxmox.com Mon Mar 2 08:24:02 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Mon, 02 Mar 2020 08:24:02 +0100 Subject: [PVE-User] Building pve-kernel fails In-Reply-To: <20200228192047.j6uleehiip2n3wxp@mew.swordarmor.fr> References: <20200228192047.j6uleehiip2n3wxp@mew.swordarmor.fr> Message-ID: <1583133363.74da21qisg.astroid@nora.none> On February 28, 2020 8:20 pm, Alarig Le Lay wrote: > Hi, > > I would like to test (and integrate) a patch to local kernels, but if I > try to build it, it fails: > > alarig at pikachu | master *%= pve-kernel % make > test -f "submodules/ubuntu-eoan/README" || git submodule update --init submodules/ubuntu-eoan > test -f "submodules/zfsonlinux/Makefile" || git submodule update --init --recursive submodules/zfsonlinux > rm -rf build/modules/pkg-zfs build/modules/tmp pkg-zfs.prepared > mkdir -p build/modules/tmp > cp -a submodules/zfsonlinux/* build/modules/tmp > cd build/modules/tmp; make kernel > make[1]: Entering directory '/home/alarig/Documents/git/proxmox.com/pve-kernel/build/modules/tmp' > make[1]: *** No rule to make target 'kernel'. Stop. > make[1]: Leaving directory '/home/alarig/Documents/git/proxmox.com/pve-kernel/build/modules/tmp' > make: *** [Makefile:99: pkg-zfs.prepared] Error 2 > zsh: exit 2 make > > build/modules/tmp doesn?t have any Makefile: > alarig at pikachu | master *%= pve-kernel % ls -lh build/modules/tmp > total 0 > drwxr-xr-x 1 alarig alarig 454 Feb 28 19:01 upstream > > Neither do the `upstream` directory: > alarig at pikachu | master *%= pve-kernel % ls -lh build/modules/tmp/upstream/ > total 100K > -rw-r--r-- 1 alarig alarig 13K Feb 28 19:01 AUTHORS > -rwxr-xr-x 1 alarig alarig 59 Feb 28 19:01 autogen.sh > drwxr-xr-x 1 alarig alarig 278 Feb 28 19:01 cmd > -rw-r--r-- 1 alarig alarig 154 Feb 28 19:01 CODE_OF_CONDUCT.md > drwxr-xr-x 1 alarig alarig 6.1K Feb 28 19:01 config > -rw-r--r-- 1 alarig alarig 15K Feb 28 19:01 configure.ac > drwxr-xr-x 1 alarig alarig 102 Feb 28 19:01 contrib > -rwxr-xr-x 1 alarig alarig 2.5K Feb 28 19:01 copy-builtin > -rw-r--r-- 1 alarig alarig 1.3K Feb 28 19:01 COPYRIGHT > drwxr-xr-x 1 alarig alarig 100 Feb 28 19:01 etc > drwxr-xr-x 1 alarig alarig 444 Feb 28 19:01 include > drwxr-xr-x 1 alarig alarig 222 Feb 28 19:01 lib > -rw-r--r-- 1 alarig alarig 19K Feb 28 19:01 LICENSE > -rw-r--r-- 1 alarig alarig 5.5K Feb 28 19:01 Makefile.am > drwxr-xr-x 1 alarig alarig 46 Feb 28 19:01 man > -rw-r--r-- 1 alarig alarig 208 Feb 28 19:01 META > drwxr-xr-x 1 alarig alarig 112 Feb 28 19:01 module > -rw-r--r-- 1 alarig alarig 97 Feb 28 19:01 NEWS > -rw-r--r-- 1 alarig alarig 1.2K Feb 28 19:01 NOTICE > -rw-r--r-- 1 alarig alarig 1.2K Feb 28 19:01 README.md > drwxr-xr-x 1 alarig alarig 48 Feb 28 19:01 rpm > drwxr-xr-x 1 alarig alarig 470 Feb 28 19:01 scripts > -rw-r--r-- 1 alarig alarig 2.4K Feb 28 19:01 TEST > drwxr-xr-x 1 alarig alarig 96 Feb 28 19:01 tests > drwxr-xr-x 1 alarig alarig 36 Feb 28 19:01 udev > -rw-r--r-- 1 alarig alarig 38 Feb 28 19:01 zfs.release.in > > Also, I had to update URLs in the git config because the shipped ones > are giving 404 (they don?t have the .git): > alarig at pikachu | master *%= pve-kernel % cat .git/config > [core] > repositoryformatversion = 0 > filemode = true > bare = false > logallrefupdates = true > [remote "origin"] > url = https://git.proxmox.com/git/pve-kernel.git > fetch = +refs/heads/*:refs/remotes/origin/* > [branch "master"] > remote = origin > merge = refs/heads/master > [submodule "submodules/ubuntu-eoan"] > 
active = true
> url = https://git.proxmox.com/git/mirror_ubuntu-eoan-kernel.git
> [submodule "submodules/zfsonlinux"]
> active = true
> url = https://git.proxmox.com/git/zfsonlinux.git
>
> Is there an updated Makefile for pve-kernel or is there anything to do
> before the make?

The problem is that the repositories are available under two different paths:

https://git.proxmox.com/git/XXX.git
git://git.proxmox.com/git/XXX

The submodules only contain a relative reference, and were written for the
second variant. If you check out via the first, they don't work. Thankfully,
git:// also supports git/XXX.git, so we can update the references to include
the '.git' suffix.

To fix your local repository, you can either manually point the submodules to
the right place (recursively), or start fresh and clone from
'git://git.proxmox.com/git/pve-kernel'.

We'll fix the .gitmodules file shipped in the repository soon, but all the
already cloned copies will need to be updated manually.

From ronny+pve-user at aasen.cx  Mon Mar 2 09:20:53 2020
From: ronny+pve-user at aasen.cx (Ronny Aasen)
Date: Mon, 2 Mar 2020 09:20:53 +0100
Subject: [PVE-User] Create proxmox cluster / storage question.
In-Reply-To: 
References: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es>
Message-ID: <312fa294-b2ac-5fd5-af79-ed7b2ada39a4@aasen.cx>

On 28.02.2020 15:25, Leandro Roggerone wrote:
> Dear Eneko:
> When you talk about redundacy and availability, do you want...
> - HA? (automatic restart of VMs in the other node in case one server fails)
> - Be able to move VMs from one server to the other "fast" (without
> copying the disks)?
> Yes , this is what I want.
> So regarding my original layout from my 5.5Tb storage:
> Im using 1T for LVM , 1TB for LVM-thin and 3.5 TB unassigned space, is it
> ok to use this unassigned space for a ceph ?
> Can I set it later ? with server on production ?
>
> Other:
> Using an NFS system, means to have an external server running a file sistem?
> So you should have at least two servers for the cluster and one for the
> file system?
> It seems to me that using ceph has a better redundancy plan an it is easier
> to deploy since I only need two servers. (am i right?).
> Thanks!
>
> El vie., 28 feb. 2020 a las 11:06, Eneko Lacunza ()
> escribió:
>
>> Hola Leandro,
>>

For NFS you have just moved your SPOF from Proxmox to NFS, so unless you have
an HA NFS solution, like a SAN with controller failover, I would not think
about NFS.

For Ceph you need 4 servers to have a proper fault-tolerant HA system: you
have 3 copies of data, and a node as the failure domain. You want the disk
layout to be fairly identical on the servers.

You can in theory run Ceph on 3 nodes, but you will have a degraded cluster
when you lose a node, and that basically neuters the best features of Ceph.
So use 4 or more nodes. Ceph really shines when you get to 8-10 nodes. Also,
most defaults assume an 8-node cluster, so you may need to tweak some warning
levels when you want to run with fewer.

Also, Proxmox HA requires quorum, so Proxmox also needs 3 or more nodes. If
you only have 2 nodes you cannot do a reliable HA solution. But you can do a
ZFS replication setup, where the running VM is replicated to the other server
periodically, and you can start that backup VM when the main one dies.

Good luck
Ronny Aasen

From elacunza at binovo.es  Mon Mar 2 09:22:45 2020
From: elacunza at binovo.es (Eneko Lacunza)
Date: Mon, 2 Mar 2020 09:22:45 +0100
Subject: [PVE-User] Create proxmox cluster / storage question.
In-Reply-To: References: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> Message-ID: <6b5f701a-0b3b-6f84-1992-4cb70f667ed2@binovo.es> Hola Leandro, El 28/2/20 a las 15:25, Leandro Roggerone escribi?: > When you talk about redundacy and availability, do you want... > - HA? (automatic restart of VMs in the other node in case one server fails) > - Be able to move VMs from one server to the other "fast" (without > copying the disks)? > Yes , this is what I want. > So regarding my original layout from my 5.5Tb storage: > Im using 1T for LVM , 1TB for LVM-thin and 3.5 TB unassigned space, is it > ok to use this unassigned space for a ceph ? > Can I set it later ? with server on production ? You can configure Ceph later, with the server in production, yes. But for Ceph you want single disks, no RAID. So I'd build the server with a 2x2TB RAID1 and leave the other 2x2TB disks for adding to Ceph. Also, read a bit about Ceph before deploying, it is not "sinple" :-) https://docs.ceph.com/docs/master/start/intro/ > Other: > Using an NFS system, means to have an external server running a file sistem? > So you should have at least two servers for the cluster and one for the > file system? > It seems to me that using ceph has a better redundancy plan an it is easier > to deploy since I only need two servers. (am i right?). For Ceph you need at least 3 servers; one of them can be a simple PC but you need it for Ceph monitor's quorum. It is recommended too for Proxmox cluster, for the same reason. Really a NFS based solution is simpler, but then NFS server is a simple point of failure. Ceph will be more resilient, but you have to understand how it works. You may find that having only two servers with Ceph storage can be risky when performing maintenance on one of the servers. Saludos Eneko > Thanks! > > El vie., 28 feb. 2020 a las 11:06, Eneko Lacunza () > escribi?: > >> Hola Leandro, >> >> El 28/2/20 a las 14:43, Leandro Roggerone escribi?: >>> Regarding your question , what is the tarjet use for this server. >>> I have a dell R610 with 6 drive bays. >>> Today I have 4 (2TB) drives in Raid5 , resulting a 5.5TB capacity. >>> I will add 2 ssd drives later in raid1 for applications that need more >> read >>> speed. >>> The purpose for this server is to run proxmox with some VMs for external >>> and internal access. >>> Im planning to build a second server and create a cluster just to have >> more >>> redundancy and availability. >>> >>> I would like to set all I can now that server is not in production and >>> minimize risk later. >>> Thats why im asking so many questions. >> Asking is good, but we need info to be able to help you ;) >> >> When you talk about redundacy and availability, do you want... >> - HA? (automatic restart of VMs in the other node in case one server fails) >> - Be able to move VMs from one server to the other "fast" (without >> copying the disks)? >> >> If your answer is yes to any of the previous questions, you have to look >> at using a NFS server or deploying Ceph. >> >> If it's no, then we can talk about local storage in your servers. What >> RAID card do you have in that server? Does it have write cache (non >> volatile of battery-backed) If it doesn't have such, RAID5 could prove >> slow (eat quite CPU), I suggest you use 2xRAID1 or a RAID10 setup. Also, >> please bear in mind that RAID5 with "big" disks is considered quite >> unsecure (risk of having a second disk failure during recovery is high). >> >> Saludos >> Eneko >>> Regards. >>> Leandro. >>> >>> >>> El vie., 28 feb. 
2020 a las 5:49, Eneko Lacunza () >>> escribi?: >>> >>>> Hola Leandro, >>>> >>>> El 27/2/20 a las 17:29, Leandro Roggerone escribi?: >>>>> Hi guys , i'm still tunning my 5.5 Tb server. >>>>> While setting storage options during install process, I set 2000 for hd >>>>> size, so I have 3.5 TB free to assign later. >>>>> >>>>> my layout is as follows: >>>>> root at pve:~# lsblk >>>>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT >>>>> sda 8:0 0 5.5T 0 disk >>>>> ??sda1 8:1 0 1007K 0 part >>>>> ??sda2 8:2 0 512M 0 part >>>>> ??sda3 8:3 0 2T 0 part >>>>> ??pve-swap 253:0 0 8G 0 lvm [SWAP] >>>>> ??pve-root 253:1 0 1T 0 lvm / >>>>> ??pve-data_tmeta 253:2 0 9G 0 lvm >>>>> ? ??pve-data 253:4 0 949.6G 0 lvm >>>>> ??pve-data_tdata 253:3 0 949.6G 0 lvm >>>>> ??pve-data 253:4 0 949.6G 0 lvm >>>>> sr0 11:0 1 1024M 0 rom >>>>> >>>>> My question is: >>>>> Is it possible to expand sda3 partition later without service outage ? >>>>> Is it possible to expand pve group on sda3 partition ? >>>> You don't need to expand sda3 really. You can just create a new >>>> partition, create a new PV with it and add the new PV to pve VG. >>>> >>>>> In case to create a proxmox cluster, what should I do with that 3.5 TB >>>> free >>>>> ? >>>> I don't know really how to reply to this. If you're building a cluster, >>>> I suggest you configure some kind of shared storage; NFS server or Ceph >>>> cluster for example. >>>> >>>>> Is there a best partition type suited for this ? Can I do it without >>>>> service outage? >>>> For what? >>>> >>>>> I have not any service running yet , so I can experiment what it takes. >>>>> Any thought about this would be great. >>>> Maybe you can start telling us your target use for this server/cluster. >>>> Also some detailed spec of the server would help; for example does it >>>> have a RAID card with more than one disk, or you're using a 6TB single >>>> disk? >>>> >>>> Cheers >>>> Eneko >>>> >>>> -- >>>> Zuzendari Teknikoa / Director T?cnico >>>> Binovo IT Human Project, S.L. >>>> Telf. 943569206 >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>>> www.binovo.es >>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >> -- >> Zuzendari Teknikoa / Director T?cnico >> Binovo IT Human Project, S.L. >> Telf. 943569206 >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From ronny+pve-user at aasen.cx Mon Mar 2 09:25:06 2020 From: ronny+pve-user at aasen.cx (Ronny Aasen) Date: Mon, 2 Mar 2020 09:25:06 +0100 Subject: [PVE-User] osd replacement to bluestore or filestore In-Reply-To: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> References: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> Message-ID: <1a2c4a79-e3da-b554-7f73-d7316bf34273@aasen.cx> On 29.02.2020 12:21, mj wrote: > Hi, > > We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph 12.2.13. > > I have ordered a replacement SSD, but we have the following doubt: > > Should we now replace the filestore HDD (journal on an SSD) with a > bluestore SSD? Or should we keep the new SSD in filestore config, in > order to minimise the time we run in 'mixed mode'. > > We have the intention to replace all filestore HDD OSDs with bluestore > SSDs, but not short term, starting in half year or so. > > So the question is really: can we run mixed bluestore/filestore ceph > cluster for an extended period of time? Anything particular to consider? > Many clusters have added new bluestore nodes, but not removed the filestore nodes until the servers reach end of life. so running mixed mode is not a problem. The problem is perhaps mainly in the added complexity and making troubleshooting trickier. and the changes in performance patterns especially when you mix on a single node. Ronny From elacunza at binovo.es Mon Mar 2 09:26:38 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 2 Mar 2020 09:26:38 +0100 Subject: [PVE-User] osd replacement to bluestore or filestore In-Reply-To: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> References: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> Message-ID: Hi MJ, El 29/2/20 a las 12:21, mj escribi?: > Hi, > > We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph > 12.2.13. > > I have ordered a replacement SSD, but we have the following doubt: > > Should we now replace the filestore HDD (journal on an SSD) with a > bluestore SSD? Or should we keep the new SSD in filestore config, in > order to minimise the time we run in 'mixed mode'. > > We have the intention to replace all filestore HDD OSDs with bluestore > SSDs, but not short term, starting in half year or so. > > So the question is really: can we run mixed bluestore/filestore ceph > cluster for an extended period of time? Anything particular to consider? > We have mixed bluestore/filestore clusters in production. Other than the increased RAM use by bluestore OSDs, we haven't noticed anything special. There was a CRC-related bug generating false warnings in filestore OSDs but it was fixed some time ago and eventually warnings stopped popping up. Cheers Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From jm at ginernet.com Mon Mar 2 10:15:04 2020 From: jm at ginernet.com (=?UTF-8?Q?Jos=c3=a9_Manuel_Giner?=) Date: Mon, 2 Mar 2020 10:15:04 +0100 Subject: [PVE-User] pve.proxmox.com has no SPF Message-ID: Hello, all e-mails from this ML arrives as SPAM because there is not SPF record for the domain pve.proxmox.com Please fix it, thanks. -- Jos? 
Manuel Giner https://ginernet.com From gaio at sv.lnf.it Mon Mar 2 12:49:33 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 2 Mar 2020 12:49:33 +0100 Subject: [PVE-User] do_IRQ: No irq handler for vector Message-ID: <20200302114933.GK3973@sv.lnf.it> I've setup an home server, using an old HP ProLiant ML110G6 server, with PVE6, kernel 5.3.18-2-pve. I reveive, spuriously, logs like: root at ino:~# grep do_IRQ /var/log/kern.log Mar 1 05:10:51 ino kernel: [197698.269626] do_IRQ: 3.40 No irq handler for vector Mar 1 08:12:52 ino kernel: [208619.012511] do_IRQ: 2.41 No irq handler for vector Mar 1 11:59:10 ino kernel: [222196.730017] do_IRQ: 3.40 No irq handler for vector Mar 1 17:09:00 ino kernel: [240787.084289] do_IRQ: 2.41 No irq handler for vector Mar 1 17:19:10 ino kernel: [241396.356441] do_IRQ: 0.41 No irq handler for vector Mar 1 23:33:00 ino kernel: [263826.632318] do_IRQ: 1.40 No irq handler for vector Mar 2 00:08:51 ino kernel: [265976.985582] do_IRQ: 3.41 No irq handler for vector Mar 2 05:40:21 ino kernel: [285866.764386] do_IRQ: 1.41 No irq handler for vector Mar 2 08:48:00 ino kernel: [297125.421168] do_IRQ: 2.40 No irq handler for vector Mar 2 08:52:01 ino kernel: [297366.060392] do_IRQ: 2.41 No irq handler for vector Mar 2 11:32:01 ino kernel: [306965.868404] do_IRQ: 0.39 No irq handler for vector Server work as expected, without any trouble, crash, ... AFAI've understood, first numer is the core (i have 4), second is the interrupt vector. Googling around i've hit: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1781016 and effectively i've many 'serial' USB interfaces connected (2 USB serial port, an APC UPS, a GSM dongle). I've tried to, using the irq vecrot, to understand the peripheral, but the command proposed: grep 35 /sys/kernel/debug/irq/irqs/* does not work, there's no more '/sys/kernel/debug/irq/irqs/*' interface in sysfs. Someone have some hint? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From leandro at tecnetmza.com.ar Mon Mar 2 14:37:40 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 2 Mar 2020 10:37:40 -0300 Subject: [PVE-User] Create proxmox cluster / storage question. In-Reply-To: <6b5f701a-0b3b-6f84-1992-4cb70f667ed2@binovo.es> References: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> <6b5f701a-0b3b-6f84-1992-4cb70f667ed2@binovo.es> Message-ID: Thanks for the feedback ! Leandro. El lun., 2 mar. 2020 a las 5:23, Eneko Lacunza () escribi?: > Hola Leandro, > > El 28/2/20 a las 15:25, Leandro Roggerone escribi?: > > When you talk about redundacy and availability, do you want... > > - HA? (automatic restart of VMs in the other node in case one server > fails) > > - Be able to move VMs from one server to the other "fast" (without > > copying the disks)? > > Yes , this is what I want. > > So regarding my original layout from my 5.5Tb storage: > > Im using 1T for LVM , 1TB for LVM-thin and 3.5 TB unassigned space, is > it > > ok to use this unassigned space for a ceph ? > > Can I set it later ? with server on production ? > You can configure Ceph later, with the server in production, yes. > > But for Ceph you want single disks, no RAID. 
So I'd build the server > with a 2x2TB RAID1 and leave the other 2x2TB disks for adding to Ceph. > > Also, read a bit about Ceph before deploying, it is not "sinple" :-) > https://docs.ceph.com/docs/master/start/intro/ > > > Other: > > Using an NFS system, means to have an external server running a file > sistem? > > So you should have at least two servers for the cluster and one for the > > file system? > > It seems to me that using ceph has a better redundancy plan an it is > easier > > to deploy since I only need two servers. (am i right?). > For Ceph you need at least 3 servers; one of them can be a simple PC but > you need it for Ceph monitor's quorum. It is recommended too for Proxmox > cluster, for the same reason. > > Really a NFS based solution is simpler, but then NFS server is a simple > point of failure. Ceph will be more resilient, but you have to > understand how it works. You may find that having only two servers with > Ceph storage can be risky when performing maintenance on one of the > servers. > > Saludos > Eneko > > > Thanks! > > > > El vie., 28 feb. 2020 a las 11:06, Eneko Lacunza () > > escribi?: > > > >> Hola Leandro, > >> > >> El 28/2/20 a las 14:43, Leandro Roggerone escribi?: > >>> Regarding your question , what is the tarjet use for this server. > >>> I have a dell R610 with 6 drive bays. > >>> Today I have 4 (2TB) drives in Raid5 , resulting a 5.5TB capacity. > >>> I will add 2 ssd drives later in raid1 for applications that need more > >> read > >>> speed. > >>> The purpose for this server is to run proxmox with some VMs for > external > >>> and internal access. > >>> Im planning to build a second server and create a cluster just to have > >> more > >>> redundancy and availability. > >>> > >>> I would like to set all I can now that server is not in production and > >>> minimize risk later. > >>> Thats why im asking so many questions. > >> Asking is good, but we need info to be able to help you ;) > >> > >> When you talk about redundacy and availability, do you want... > >> - HA? (automatic restart of VMs in the other node in case one server > fails) > >> - Be able to move VMs from one server to the other "fast" (without > >> copying the disks)? > >> > >> If your answer is yes to any of the previous questions, you have to look > >> at using a NFS server or deploying Ceph. > >> > >> If it's no, then we can talk about local storage in your servers. What > >> RAID card do you have in that server? Does it have write cache (non > >> volatile of battery-backed) If it doesn't have such, RAID5 could prove > >> slow (eat quite CPU), I suggest you use 2xRAID1 or a RAID10 setup. Also, > >> please bear in mind that RAID5 with "big" disks is considered quite > >> unsecure (risk of having a second disk failure during recovery is high). > >> > >> Saludos > >> Eneko > >>> Regards. > >>> Leandro. > >>> > >>> > >>> El vie., 28 feb. 2020 a las 5:49, Eneko Lacunza () > >>> escribi?: > >>> > >>>> Hola Leandro, > >>>> > >>>> El 27/2/20 a las 17:29, Leandro Roggerone escribi?: > >>>>> Hi guys , i'm still tunning my 5.5 Tb server. > >>>>> While setting storage options during install process, I set 2000 for > hd > >>>>> size, so I have 3.5 TB free to assign later. 
> >>>>> > >>>>> my layout is as follows: > >>>>> root at pve:~# lsblk > >>>>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > >>>>> sda 8:0 0 5.5T 0 disk > >>>>> ??sda1 8:1 0 1007K 0 part > >>>>> ??sda2 8:2 0 512M 0 part > >>>>> ??sda3 8:3 0 2T 0 part > >>>>> ??pve-swap 253:0 0 8G 0 lvm [SWAP] > >>>>> ??pve-root 253:1 0 1T 0 lvm / > >>>>> ??pve-data_tmeta 253:2 0 9G 0 lvm > >>>>> ? ??pve-data 253:4 0 949.6G 0 lvm > >>>>> ??pve-data_tdata 253:3 0 949.6G 0 lvm > >>>>> ??pve-data 253:4 0 949.6G 0 lvm > >>>>> sr0 11:0 1 1024M 0 rom > >>>>> > >>>>> My question is: > >>>>> Is it possible to expand sda3 partition later without service outage > ? > >>>>> Is it possible to expand pve group on sda3 partition ? > >>>> You don't need to expand sda3 really. You can just create a new > >>>> partition, create a new PV with it and add the new PV to pve VG. > >>>> > >>>>> In case to create a proxmox cluster, what should I do with that 3.5 > TB > >>>> free > >>>>> ? > >>>> I don't know really how to reply to this. If you're building a > cluster, > >>>> I suggest you configure some kind of shared storage; NFS server or > Ceph > >>>> cluster for example. > >>>> > >>>>> Is there a best partition type suited for this ? Can I do it without > >>>>> service outage? > >>>> For what? > >>>> > >>>>> I have not any service running yet , so I can experiment what it > takes. > >>>>> Any thought about this would be great. > >>>> Maybe you can start telling us your target use for this > server/cluster. > >>>> Also some detailed spec of the server would help; for example does it > >>>> have a RAID card with more than one disk, or you're using a 6TB single > >>>> disk? > >>>> > >>>> Cheers > >>>> Eneko > >>>> > >>>> -- > >>>> Zuzendari Teknikoa / Director T?cnico > >>>> Binovo IT Human Project, S.L. > >>>> Telf. 943569206 > >>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>>> www.binovo.es > >>>> > >>>> _______________________________________________ > >>>> pve-user mailing list > >>>> pve-user at pve.proxmox.com > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >> -- > >> Zuzendari Teknikoa / Director T?cnico > >> Binovo IT Human Project, S.L. > >> Telf. 943569206 > >> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >> www.binovo.es > >> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From leandro at tecnetmza.com.ar Mon Mar 2 14:37:53 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 2 Mar 2020 10:37:53 -0300 Subject: [PVE-User] Create proxmox cluster / storage question. 
In-Reply-To: <312fa294-b2ac-5fd5-af79-ed7b2ada39a4@aasen.cx> References: <7d5075bd-0442-6902-8296-b4cad572adba@binovo.es> <312fa294-b2ac-5fd5-af79-ed7b2ada39a4@aasen.cx> Message-ID: Thanks for the feedback! El lun., 2 mar. 2020 a las 5:21, Ronny Aasen () escribi?: > On 28.02.2020 15:25, Leandro Roggerone wrote: > > Dear Eneko: > > When you talk about redundacy and availability, do you want... > > - HA? (automatic restart of VMs in the other node in case one server > fails) > > - Be able to move VMs from one server to the other "fast" (without > > copying the disks)? > > Yes , this is what I want. > > So regarding my original layout from my 5.5Tb storage: > > Im using 1T for LVM , 1TB for LVM-thin and 3.5 TB unassigned space, is > it > > ok to use this unassigned space for a ceph ? > > Can I set it later ? with server on production ? > > > > Other: > > Using an NFS system, means to have an external server running a file > sistem? > > So you should have at least two servers for the cluster and one for the > > file system? > > It seems to me that using ceph has a better redundancy plan an it is > easier > > to deploy since I only need two servers. (am i right?). > > Thanks! > > > > El vie., 28 feb. 2020 a las 11:06, Eneko Lacunza () > > escribi?: > > > >> Hola Leandro, > >> > > > for NFS you have just moved your SPOF from proxmox to NFS. so unless you > have a HA NFS solution like a SAN with controller failover i would not > think about NFS. > > for ceph you need 4 server to a have a proper fault tolerant HA system > you have 3 copies of data. and a node as a failuredomain. you want the > disk layout to be fairly identical on the servers. > > you can in theory run ceph on 3 nodes, but you will have a degraded > cluster when you loose a node. and basically neuter the best features of > ceph. so use 4 or more nodes. ceph really shines when you get to 8-10 > nodes. also most defaults assume a 8 node cluster, so you may need to > tweak some warning levels when you want to run with fewer. > > Also proxmox HA requires quorum. so proxmox also need 3 or more nodes. > if you only have 2 nodes you can not do a reliable HA solution. But you > can do a zfs replication system, where the running vm is replicated to > the other server periodically. and you can start that backup vm when the > main one dies. > > Good luck > Ronny Aasen > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gilberto.nunes32 at gmail.com Mon Mar 2 19:09:50 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Mon, 2 Mar 2020 15:09:50 -0300 Subject: [PVE-User] VM Console - black screen Message-ID: Hi there I have two pve server all fully update, running PVE 6.1-7. Both servers are in cluster. The problem is when select VM Console nothing happens, and turns in black-gray screen! Any one has the same behavior?? Thanks a lot --- Gilberto Nunes Ferreira From jonny at lemonjuice.tk Mon Mar 2 19:15:21 2020 From: jonny at lemonjuice.tk (Jonny Proud) Date: Mon, 2 Mar 2020 18:15:21 +0000 Subject: [PVE-User] VM Console - black screen In-Reply-To: References: Message-ID: <339e6c6a-2d2e-4c99-b376-b948d19d93c3@lemonjuice.tk> Hi Gilberto, Mt 6.1.7 install works fine. Have you tried a different browser? On 02/03/2020 18:09, Gilberto Nunes wrote: > Hi there > > I have two pve server all fully update, running PVE 6.1-7. > Both servers are in cluster. 
> The problem is when select VM Console nothing happens, and turns in > black-gray screen! > Any one has the same behavior?? > Thanks a lot > --- > Gilberto Nunes Ferreira > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Jonny From gilberto.nunes32 at gmail.com Mon Mar 2 19:22:08 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Mon, 2 Mar 2020 15:22:08 -0300 Subject: [PVE-User] VM Console - black screen In-Reply-To: References: Message-ID: Yes! I have tried Firefox too! Same problem! --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em seg., 2 de mar. de 2020 ?s 15:15, Jonny Proud via pve-user < pve-user at pve.proxmox.com> escreveu: > > > > ---------- Forwarded message ---------- > From: Jonny Proud > To: pve-user at pve.proxmox.com > Cc: > Bcc: > Date: Mon, 2 Mar 2020 18:15:21 +0000 > Subject: Re: [PVE-User] VM Console - black screen > Hi Gilberto, > > Mt 6.1.7 install works fine. Have you tried a different browser? > > On 02/03/2020 18:09, Gilberto Nunes wrote: > > Hi there > > > > I have two pve server all fully update, running PVE 6.1-7. > > Both servers are in cluster. > > The problem is when select VM Console nothing happens, and turns in > > black-gray screen! > > Any one has the same behavior?? > > Thanks a lot > > --- > > Gilberto Nunes Ferreira > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- > Jonny > > > > > ---------- Forwarded message ---------- > From: Jonny Proud via pve-user > To: pve-user at pve.proxmox.com > Cc: Jonny Proud > Bcc: > Date: Mon, 2 Mar 2020 18:15:21 +0000 > Subject: Re: [PVE-User] VM Console - black screen > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From alex at calicolabs.com Mon Mar 2 19:34:52 2020 From: alex at calicolabs.com (Alex Chekholko) Date: Mon, 2 Mar 2020 10:34:52 -0800 Subject: [PVE-User] do_IRQ: No irq handler for vector In-Reply-To: <20200302114933.GK3973@sv.lnf.it> References: <20200302114933.GK3973@sv.lnf.it> Message-ID: Hi Marco, I have seen this on various kernel versions (CentOS / Debian) and on various hardware; my understanding is that it is safe to ignore. Regards, Alex On Mon, Mar 2, 2020 at 3:50 AM Marco Gaiarin wrote: > > I've setup an home server, using an old HP ProLiant ML110G6 server, > with PVE6, kernel 5.3.18-2-pve. 
> > > I reveive, spuriously, logs like: > > root at ino:~# grep do_IRQ /var/log/kern.log > Mar 1 05:10:51 ino kernel: [197698.269626] do_IRQ: 3.40 No irq handler > for vector > Mar 1 08:12:52 ino kernel: [208619.012511] do_IRQ: 2.41 No irq handler > for vector > Mar 1 11:59:10 ino kernel: [222196.730017] do_IRQ: 3.40 No irq handler > for vector > Mar 1 17:09:00 ino kernel: [240787.084289] do_IRQ: 2.41 No irq handler > for vector > Mar 1 17:19:10 ino kernel: [241396.356441] do_IRQ: 0.41 No irq handler > for vector > Mar 1 23:33:00 ino kernel: [263826.632318] do_IRQ: 1.40 No irq handler > for vector > Mar 2 00:08:51 ino kernel: [265976.985582] do_IRQ: 3.41 No irq handler > for vector > Mar 2 05:40:21 ino kernel: [285866.764386] do_IRQ: 1.41 No irq handler > for vector > Mar 2 08:48:00 ino kernel: [297125.421168] do_IRQ: 2.40 No irq handler > for vector > Mar 2 08:52:01 ino kernel: [297366.060392] do_IRQ: 2.41 No irq handler > for vector > Mar 2 11:32:01 ino kernel: [306965.868404] do_IRQ: 0.39 No irq handler > for vector > > Server work as expected, without any trouble, crash, ... > > > AFAI've understood, first numer is the core (i have 4), second is the > interrupt vector. > > Googling around i've hit: > > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1781016 > > and effectively i've many 'serial' USB interfaces connected (2 USB > serial port, an APC UPS, a GSM dongle). > > > I've tried to, using the irq vecrot, to understand the peripheral, but > the command proposed: > > grep 35 /sys/kernel/debug/irq/irqs/* > > does not work, there's no more '/sys/kernel/debug/irq/irqs/*' interface > in sysfs. > > > Someone have some hint? Thanks. > > -- > dott. Marco Gaiarin GNUPG Key ID: > 240A3D66 > Associazione ``La Nostra Famiglia'' > http://www.lanostrafamiglia.it/ > Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento > (PN) > marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f > +39-0434-842797 > > Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! > http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 > (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gilberto.nunes32 at gmail.com Mon Mar 2 19:38:01 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Mon, 2 Mar 2020 15:38:01 -0300 Subject: [PVE-User] VM Console - black screen In-Reply-To: References: Message-ID: It's seems the network don't support MTU 9000. I have disable it and back to normal... (syslog) Mar 2 14:04:38 tiger corosync[3995]: [KNET ] pmtud: This can be caused by this node interface MTU too big or a network device that does not support or has been misconfigured to manage MTU of this size, or packet loss. knet will continue to run but performances might be affected. --- Gilberto Nunes Ferreira Em seg., 2 de mar. de 2020 ?s 15:22, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Yes! I have tried Firefox too! Same problem! > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em seg., 2 de mar. 
de 2020 às 15:15, Jonny Proud via pve-user <
pve-user at pve.proxmox.com> escreveu:

>>
>>
>>
>> ---------- Forwarded message ----------
>> From: Jonny Proud
>> To: pve-user at pve.proxmox.com
>> Cc:
>> Bcc:
>> Date: Mon, 2 Mar 2020 18:15:21 +0000
>> Subject: Re: [PVE-User] VM Console - black screen
>> Hi Gilberto,
>>
>> Mt 6.1.7 install works fine. Have you tried a different browser?
>>
>> On 02/03/2020 18:09, Gilberto Nunes wrote:
>> > Hi there
>> >
>> > I have two pve server all fully update, running PVE 6.1-7.
>> > Both servers are in cluster.
>> > The problem is when select VM Console nothing happens, and turns in
>> > black-gray screen!
>> > Any one has the same behavior??
>> > Thanks a lot
>> > ---
>> > Gilberto Nunes Ferreira
>> > _______________________________________________
>> > pve-user mailing list
>> > pve-user at pve.proxmox.com
>> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>> --
>> Jonny
>>
>>
>>
>>
>> ---------- Forwarded message ----------
>> From: Jonny Proud via pve-user
>> To: pve-user at pve.proxmox.com
>> Cc: Jonny Proud
>> Bcc:
>> Date: Mon, 2 Mar 2020 18:15:21 +0000
>> Subject: Re: [PVE-User] VM Console - black screen
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>

From gaio at sv.lnf.it  Tue Mar 3 11:49:30 2020
From: gaio at sv.lnf.it (Marco Gaiarin)
Date: Tue, 3 Mar 2020 11:49:30 +0100
Subject: [PVE-User] do_IRQ: No irq handler for vector
In-Reply-To: 
References: <20200302114933.GK3973@sv.lnf.it>
Message-ID: <20200303104930.GD2818@sv.lnf.it>

Mandi! Alex Chekholko via pve-user
  In chel di` si favelave...

> I have seen this on various kernel versions (CentOS / Debian) and on
> various hardware; my understanding is that it is safe to ignore.

OK. Thanks!

-- 
dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia''        http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)

From devzero at web.de  Tue Mar 3 13:30:46 2020
From: devzero at web.de (Roland @web.de)
Date: Tue, 3 Mar 2020 13:30:46 +0100
Subject: [PVE-User] pve-zsync - no zvol replication ?
Message-ID: <9b0c448e-7ba2-2e08-ffdf-92a930ebdb7a@web.de>

Hello,

Apparently pve-zsync does not seem to replicate ZFS zvols (but only regular
ZFS datasets)!?

Since zvols are the default for VMs, I'm curious whether this is a bug or a
(missing) feature!?

I tried synchronizing the following way:

pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source hddpool/vms -v

source:
root at pve-node1# zfs list -r hddpool/vms
NAME                        USED  AVAIL  REFER  MOUNTPOINT
hddpool/vms                34.2G  3.42T  30.6K  /hddpool/vms
hddpool/vms/vm-100-disk-0  34.2G  3.46T  1.18G  -

dest:
root at pve-node2# zfs list -r hddpool/pve-node1-zsync
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
hddpool/pve-node1-zsync               156K  3.44T  32.0K  /hddpool/pve-node1-zsync
hddpool/pve-node1-zsync/hddpool      61.3K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool
hddpool/pve-node1-zsync/hddpool/vms  30.6K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool/vms

The documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear
about that.
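For reference, the wiki's own examples pass either a VMID or a single dataset
as the source rather than a parent dataset (recursive syncing is listed there
as a limitation), so presumably one of these forms is what is intended — a
minimal sketch, untested here, with the same option spelling as the command
above:

    # let pve-zsync pick up the VM's disks (zvols included) from its config
    pve-zsync sync -source 100 -dest pve-node2:hddpool/pve-node1-zsync -v

    # or point it directly at the zvol dataset instead of its parent
    pve-zsync sync -source hddpool/vms/vm-100-disk-0 -dest pve-node2:hddpool/pve-node1-zsync -v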
i'm asking since i want to use builtin utils instead of 3rd party tools for that job. regards roland From gilberto.nunes32 at gmail.com Tue Mar 3 15:16:15 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 3 Mar 2020 11:16:15 -0300 Subject: [PVE-User] VM Console - black screen In-Reply-To: References: Message-ID: Hi there again Well.... After a while, the problems comes up again! We have plug a notebook directly into Proxmox server and nevertheless the problem persists! Anybody has some clue that can help with this issue! --- Gilberto Nunes Ferreira Em seg., 2 de mar. de 2020 ?s 15:38, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > It's seems the network don't support MTU 9000. I have disable it and back > to normal... > > (syslog) > Mar 2 14:04:38 tiger corosync[3995]: [KNET ] pmtud: This can be caused > by this node interface MTU too big or a network device that does not > support or has been misconfigured to manage MTU of this size, or packet > loss. knet will continue to run but performances might be affected. > --- > Gilberto Nunes Ferreira > > > Em seg., 2 de mar. de 2020 ?s 15:22, Gilberto Nunes < > gilberto.nunes32 at gmail.com> escreveu: > >> Yes! I have tried Firefox too! Same problem! >> --- >> Gilberto Nunes Ferreira >> >> (47) 3025-5907 >> (47) 99676-7530 - Whatsapp / Telegram >> >> Skype: gilberto.nunes36 >> >> >> >> >> >> Em seg., 2 de mar. de 2020 ?s 15:15, Jonny Proud via pve-user < >> pve-user at pve.proxmox.com> escreveu: >> >>> >>> >>> >>> ---------- Forwarded message ---------- >>> From: Jonny Proud >>> To: pve-user at pve.proxmox.com >>> Cc: >>> Bcc: >>> Date: Mon, 2 Mar 2020 18:15:21 +0000 >>> Subject: Re: [PVE-User] VM Console - black screen >>> Hi Gilberto, >>> >>> Mt 6.1.7 install works fine. Have you tried a different browser? >>> >>> On 02/03/2020 18:09, Gilberto Nunes wrote: >>> > Hi there >>> > >>> > I have two pve server all fully update, running PVE 6.1-7. >>> > Both servers are in cluster. >>> > The problem is when select VM Console nothing happens, and turns in >>> > black-gray screen! >>> > Any one has the same behavior?? >>> > Thanks a lot >>> > --- >>> > Gilberto Nunes Ferreira >>> > _______________________________________________ >>> > pve-user mailing list >>> > pve-user at pve.proxmox.com >>> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >>> -- >>> Jonny >>> >>> >>> >>> >>> ---------- Forwarded message ---------- >>> From: Jonny Proud via pve-user >>> To: pve-user at pve.proxmox.com >>> Cc: Jonny Proud >>> Bcc: >>> Date: Mon, 2 Mar 2020 18:15:21 +0000 >>> Subject: Re: [PVE-User] VM Console - black screen >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> From gianni.milo22 at gmail.com Tue Mar 3 16:35:40 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Tue, 3 Mar 2020 15:35:40 +0000 Subject: [PVE-User] pve-zsync - no zvol replication ? In-Reply-To: <9b0c448e-7ba2-2e08-ffdf-92a930ebdb7a@web.de> References: <9b0c448e-7ba2-2e08-ffdf-92a930ebdb7a@web.de> Message-ID: Could the following be the reason ...? zvols are being supported as far as I can tell... Limitations - not possible to sync recursive G. On Tue, 3 Mar 2020 at 12:30, Roland @web.de wrote: > hello, > > apparently pve-zsync does not seem to replicate zfs zvols (but only > regular zfs datasets) !? > > since zvols is default for VMs, i'm curious if this is a bug or a > (missing) feature !? 
> > > i tried syncronizing the following way: > > > pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source > hddpool/vms -v > > source: > root at pve-node1# zfs list -r hddpool/vms > NAME USED AVAIL REFER MOUNTPOINT > hddpool/vms 34.2G 3.42T 30.6K /hddpool/vms > hddpool/vms/vm-100-disk-0 34.2G 3.46T 1.18G - > > dest: > root at pve-node2# zfs list -r hddpool/pve-node1-zsync > NAME USED AVAIL REFER MOUNTPOINT > hddpool/pve-node1-zsync 156K 3.44T 32.0K > /hddpool/pve-node1-zsync > hddpool/pve-node1-zsync/hddpool 61.3K 3.44T 30.6K > /hddpool/pve-node1-zsync/hddpool > hddpool/pve-node1-zsync/hddpool/vms 30.6K 3.44T 30.6K > /hddpool/pve-node1-zsync/hddpool/vms > > the documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear > about that. > > i'm asking since i want to use builtin utils instead of 3rd party tools > for that job. > > regards > roland > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From devzero at web.de Tue Mar 3 17:30:41 2020 From: devzero at web.de (Roland @web.de) Date: Tue, 3 Mar 2020 17:30:41 +0100 Subject: [PVE-User] pve-zsync - no zvol replication ? In-Reply-To: References: <9b0c448e-7ba2-2e08-ffdf-92a930ebdb7a@web.de> Message-ID: <1fecc36b-6be4-6bfe-380e-61de61506627@web.de> could be. ok - but if they are supported, how do i use pve-zsync with zvols then ? Am 03.03.20 um 16:35 schrieb Gianni Milo: > Could the following be the reason ...? zvols are being supported as far as > I can tell... > Limitations > > - not possible to sync recursive > > > G. > > > On Tue, 3 Mar 2020 at 12:30, Roland @web.de wrote: > >> hello, >> >> apparently pve-zsync does not seem to replicate zfs zvols (but only >> regular zfs datasets) !? >> >> since zvols is default for VMs, i'm curious if this is a bug or a >> (missing) feature !? >> >> >> i tried syncronizing the following way: >> >> >> pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source >> hddpool/vms -v >> >> source: >> root at pve-node1# zfs list -r hddpool/vms >> NAME USED AVAIL REFER MOUNTPOINT >> hddpool/vms 34.2G 3.42T 30.6K /hddpool/vms >> hddpool/vms/vm-100-disk-0 34.2G 3.46T 1.18G - >> >> dest: >> root at pve-node2# zfs list -r hddpool/pve-node1-zsync >> NAME USED AVAIL REFER MOUNTPOINT >> hddpool/pve-node1-zsync 156K 3.44T 32.0K >> /hddpool/pve-node1-zsync >> hddpool/pve-node1-zsync/hddpool 61.3K 3.44T 30.6K >> /hddpool/pve-node1-zsync/hddpool >> hddpool/pve-node1-zsync/hddpool/vms 30.6K 3.44T 30.6K >> /hddpool/pve-node1-zsync/hddpool/vms >> >> the documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear >> about that. >> >> i'm asking since i want to use builtin utils instead of 3rd party tools >> for that job. 
>> >> regards >> roland >> >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gaio at sv.lnf.it Tue Mar 3 18:50:14 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Tue, 3 Mar 2020 18:50:14 +0100 Subject: [PVE-User] Debian buster, systemd, container and nesting=1 In-Reply-To: <20200227162608.3135a2ea@rosa.proxmox.com> References: <20200218154426.GG3479@sv.lnf.it> <20200225184341.4b1db066@rosa.proxmox.com> <20200226110156.GC3896@sv.lnf.it> <20200227162608.3135a2ea@rosa.proxmox.com> Message-ID: <20200303175014.GZ2818@sv.lnf.it> Mandi! Stoiko Ivanov In chel di` si favelave... > AFAICU one robust (although not very performant way) to run a AD DC with > NTACLs on a unprivileged container would be to use the xattr_tdb module > (not actively tested though): > https://wiki.samba.org/index.php/Using_the_xattr_tdb_VFS_Module Specifically asked in samba ML; xattr_tdb i a test module, broken, that HAVE NOT to be used in production. The only ''supported'' way to run Samba AD DC is via filesystem XATTR. Also, seems the same 'troubles' hit BSD Jails: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220844 https://bugzilla.samba.org/show_bug.cgi?id=12912 see: https://lists.samba.org/archive/samba/2020-February/228653.html > They are independent - a good explanation of what nesting does can be > found in our source: > https://git.proxmox.com/?p=pve-container.git;a=blob;f=src/PVE/LXC.pm;h=34ca2a357294f63e8b49d965bd54c24905642e17;hb=HEAD#l581 > (it allows among other things to mount /proc, and /sys, which is > problematic for privileged containers > > The issue with apache('s systemd-unit) in the privileged container, is > that the mount is denied by apparmor (the apparmor rules are stricter for > privileged containers, than for unprivileged, because if someone breaks > out of an unprivileged container they are only a regular user on the host) > > I hope this explains it. Ahem, no. ;-) But indeed is my fault that i know very little about systemd, apparmor and all those new wizardry... ;-) -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From ooo at mechtilde.de Tue Mar 3 21:14:03 2020 From: ooo at mechtilde.de (Mechtilde) Date: Tue, 3 Mar 2020 21:14:03 +0100 Subject: [PVE-User] Installation a VM at proxmox Message-ID: <755a5878-ac4b-39db-dc74-af22d51cb677@mechtilde.de> Hello I'm very new to this task. In the first step I want to run an application which need Debian 9 Stretch on a recent stable system (Debian 10, Buster) Later on I want to use some more applications which have different dependencies. I did a research and found proxmox. I installed it on an machine as dualboot beside Windows 10. Proxmox itself is running. I can start it with ahttps://:8006. I can provide the downloaded iso image to proxmox. I can also use the dialog to create a VM and connect it to the *.iso. 
After that I get the message "Failled to connect to server and the splash screen for noVNC. The status message is: "Failed to run vncproxy." I looked many tutorials and videos on the net but can?t find a hint to what went wrong. I hope someone here can give me a hint. thanks in advance and kind regards -- Mechtilde Stehmann ## Apache OpenOffice ## Freie Office Suite f?r Linux, MacOSX, Windows ## Debian Developer ## PGP encryption welcome ## F0E3 7F3D C87A 4998 2899 39E7 F287 7BBA 141A AD7F -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From f.gruenbichler at proxmox.com Wed Mar 4 07:28:45 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Wed, 04 Mar 2020 07:28:45 +0100 Subject: [PVE-User] Installation a VM at proxmox In-Reply-To: <755a5878-ac4b-39db-dc74-af22d51cb677@mechtilde.de> References: <755a5878-ac4b-39db-dc74-af22d51cb677@mechtilde.de> Message-ID: <1583303134.3tqgqi4yew.astroid@nora.none> On March 3, 2020 9:14 pm, Mechtilde wrote: > I can also use the dialog to create a VM and connect it to the *.iso. > > After that I get the message "Failled to connect to server and the > splash screen for noVNC. did the VM actually start? you should have a visual indication of that in the left side tree (instead of the greyed out, inactive icon there should be an active icon with a small 'play symbol' overlayed), and in the VM view itself (after you selected the VM in the left-side tree) the summary should say 'running' instead of 'stopped'. you can also check by running "qm list" or "qm status " on the CLI. > The status message is: "Failed to run vncproxy." this message usually occurs when the VM is not running. From laurentfdumont at gmail.com Wed Mar 4 14:04:33 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 4 Mar 2020 08:04:33 -0500 Subject: [PVE-User] Installation a VM at proxmox In-Reply-To: <1583303134.3tqgqi4yew.astroid@nora.none> References: <755a5878-ac4b-39db-dc74-af22d51cb677@mechtilde.de> <1583303134.3tqgqi4yew.astroid@nora.none> Message-ID: If this is a regular laptop, make sure you have enabled VTX in the BIOS options. On Wed, Mar 4, 2020, 2:46 AM Fabian Gr?nbichler wrote: > On March 3, 2020 9:14 pm, Mechtilde wrote: > > I can also use the dialog to create a VM and connect it to the *.iso. > > > > After that I get the message "Failled to connect to server and the > > splash screen for noVNC. > > did the VM actually start? you should have a visual indication of that > in the left side tree (instead of the greyed out, inactive icon there > should be an active icon with a small 'play symbol' overlayed), and in > the VM view itself (after you selected the VM in the left-side tree) the > summary should say 'running' instead of 'stopped'. > > you can also check by running "qm list" or "qm status " on the > CLI. > > > The status message is: "Failed to run vncproxy." > > this message usually occurs when the VM is not running. > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From luiscoralle at fi.uncoma.edu.ar Wed Mar 4 15:21:57 2020 From: luiscoralle at fi.uncoma.edu.ar (Luis G. Coralle) Date: Wed, 4 Mar 2020 11:21:57 -0300 Subject: Error starting VM Message-ID: Hello! I'm trying to start windows 10 VM and I get: TASK ERROR: timeout waiting on systemd The VM was running fine. 
Last day it crash. I rebooted it and now it won't start... PVE version is 5.4-13 Did someone have the same thing? Thanks -- Luis G. Coralle Secretar?a de TIC Facultad de Inform?tica Universidad Nacional del Comahue (+54) 299-4490300 Int 647 From leandro at tecnetmza.com.ar Wed Mar 4 16:36:49 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Wed, 4 Mar 2020 12:36:49 -0300 Subject: [PVE-User] reduce my lvm-thin partition to enlarge my root lvm partition. Message-ID: Regarding current LVM scheme: root at pve:~# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert data pve twi-a-tz-- <2.80t 0.00 0.18 root pve -wi-ao---- 1.07t swap pve -wi-ao---- 8.00g I would like to reduce data to 2T and enlarge root to 2T. root partition is LVM and data partition is LVM-thin. How should I reduce my data partition ? Reading some blogs / tutos , like https://www.linuxtechi.com/reduce-size-lvm-partition/ It is explained how to reduce LVM partition but not LVMthin partition. So , in order to reduce my data (LVM partition) I need to umount the partition to reduce later. Problem is , I can not find the path to umount the partition neither on lsblk nor df -h ... there is not mount point. Does anyone succeeded doing this ? Can you point some doc ? command list ? Regards, Leandro. Following , lvdisplay and vgdisplay output. root at pve:~# vgdisplay --- Volume group --- VG Name pve System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 8 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 2 Max PV 0 Cur PV 1 Act PV 1 VG Size <3.91 TiB PE Size 4.00 MiB Total PE 1023871 Alloc PE / Size 1023871 / <3.91 TiB Free PE / Size 0 / 0 VG UUID T6m6jY-W5k9-7J2p-TUIr-9g5Q-fm0g-HHt72c root at pve:~# lvdisplay --- Logical volume --- LV Path /dev/pve/swap LV Name swap VG Name pve LV UUID rJAeXM-9iQj-9gyN-WEZV-zX6q-omUl-6lSwyj LV Write Access read/write LV Creation host, time proxmox, 2020-03-04 11:24:31 -0300 LV Status available # open 2 LV Size 8.00 GiB Current LE 2048 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 --- Logical volume --- LV Path /dev/pve/root LV Name root VG Name pve LV UUID kOXw32-Wrbk-0XqI-nnBq-sYLU-lKgH-YTBvLf LV Write Access read/write LV Creation host, time proxmox, 2020-03-04 11:24:31 -0300 LV Status available # open 1 LV Size 1.07 TiB Current LE 280672 Segments 2 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Name data VG Name pve LV UUID vi0DaQ-eeii-Q2eT-26qy-zk4F-SMxs-NdmsfH LV Write Access read/write LV Creation host, time proxmox, 2020-03-04 11:24:31 -0300 LV Pool metadata data_tmeta LV Pool data data_tdata LV Status available # open 0 LV Size <2.80 TiB Allocated pool data 0.00% Allocated metadata 0.18% Current LE 733055 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:4 Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> From lists at merit.unu.edu Wed Mar 4 16:39:23 2020 From: lists at merit.unu.edu (mj) Date: Wed, 4 Mar 2020 16:39:23 +0100 Subject: [PVE-User] osd replacement to bluestore or filestore In-Reply-To: References: <44474a04-596f-52db-c242-9c85053b197a@merit.unu.edu> Message-ID: <35d07c4a-26c8-7530-a2e7-8a53cdf03531@merit.unu.edu> Hi Ronny and Eneko, Just to say: thank you for your replies! 
MJ On 3/2/20 9:26 AM, Eneko Lacunza wrote: > Hi MJ, > > El 29/2/20 a las 12:21, mj escribi?: >> Hi, >> >> We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph >> 12.2.13. >> >> I have ordered a replacement SSD, but we have the following doubt: >> >> Should we now replace the filestore HDD (journal on an SSD) with a >> bluestore SSD? Or should we keep the new SSD in filestore config, in >> order to minimise the time we run in 'mixed mode'. >> >> We have the intention to replace all filestore HDD OSDs with bluestore >> SSDs, but not short term, starting in half year or so. >> >> So the question is really: can we run mixed bluestore/filestore ceph >> cluster for an extended period of time? Anything particular to consider? >> > We have mixed bluestore/filestore clusters in production. Other than the > increased RAM use by bluestore OSDs, we haven't noticed anything > special. There was a CRC-related bug generating false warnings in > filestore OSDs but it was fixed some time ago and eventually warnings > stopped popping up. > > Cheers > Eneko > > From gilberto.nunes32 at gmail.com Wed Mar 4 16:43:01 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 4 Mar 2020 12:43:01 -0300 Subject: [PVE-User] Error starting VM In-Reply-To: References: Message-ID: systemctl status qemu.slice --- Gilberto Nunes Ferreira Em qua., 4 de mar. de 2020 ?s 11:23, Luis G. Coralle via pve-user < pve-user at pve.proxmox.com> escreveu: > > > > ---------- Forwarded message ---------- > From: "Luis G. Coralle" > To: PVE User List > Cc: > Bcc: > Date: Wed, 4 Mar 2020 11:21:57 -0300 > Subject: Error starting VM > Hello! > I'm trying to start windows 10 VM and I get: > TASK ERROR: timeout waiting on systemd > The VM was running fine. Last day it crash. I rebooted it and now it > won't start... > > PVE version is 5.4-13 > Did someone have the same thing? > Thanks > > -- > Luis G. Coralle > Secretar?a de TIC > Facultad de Inform?tica > Universidad Nacional del Comahue > (+54) 299-4490300 Int 647 > > > > ---------- Forwarded message ---------- > From: "Luis G. Coralle via pve-user" > To: PVE User List > Cc: "Luis G. Coralle" > Bcc: > Date: Wed, 4 Mar 2020 11:21:57 -0300 > Subject: [PVE-User] Error starting VM > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From laurentfdumont at gmail.com Wed Mar 4 18:25:21 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 4 Mar 2020 12:25:21 -0500 Subject: [PVE-User] Error starting VM In-Reply-To: References: Message-ID: You should be seeing a bit more output : coldadmin at bacon:~$ sudo systemctl status qemu.slice ? qemu.slice Loaded: loaded Active: active since Sat 2020-02-08 11:26:41 EST; 3 weeks 4 days ago Tasks: 52 Memory: 30.0G CGroup: /qemu.slice ??102.scope ? ??2660 /usr/bin/kvm -id 102 -name foreman -chardev socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile ??126.scope ? ??2911 /usr/bin/kvm -id 126 -name pbx -chardev socket,id=qmp,path=/var/run/qemu-server/126.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var ??134.scope ? 
??14073 /usr/bin/kvm -id 134 -name kolla-compute001 -chardev socket,id=qmp,path=/var/run/qemu-server/134.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control ??139.scope ? ??14030 /usr/bin/kvm -id 139 -name kolla-compute002 -chardev socket,id=qmp,path=/var/run/qemu-server/139.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control ??155.scope ??2967 /usr/bin/kvm -id 155 -name rancher01 -chardev socket,id=qmp,path=/var/run/qemu-server/155.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfil Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable. On Wed, Mar 4, 2020 at 10:44 AM Gilberto Nunes wrote: > systemctl status qemu.slice > > --- > Gilberto Nunes Ferreira > > > Em qua., 4 de mar. de 2020 ?s 11:23, Luis G. Coralle via pve-user < > pve-user at pve.proxmox.com> escreveu: > > > > > > > > > ---------- Forwarded message ---------- > > From: "Luis G. Coralle" > > To: PVE User List > > Cc: > > Bcc: > > Date: Wed, 4 Mar 2020 11:21:57 -0300 > > Subject: Error starting VM > > Hello! > > I'm trying to start windows 10 VM and I get: > > TASK ERROR: timeout waiting on systemd > > The VM was running fine. Last day it crash. I rebooted it and now it > > won't start... > > > > PVE version is 5.4-13 > > Did someone have the same thing? > > Thanks > > > > -- > > Luis G. Coralle > > Secretar?a de TIC > > Facultad de Inform?tica > > Universidad Nacional del Comahue > > (+54) 299-4490300 Int 647 > > > > > > > > ---------- Forwarded message ---------- > > From: "Luis G. Coralle via pve-user" > > To: PVE User List > > Cc: "Luis G. Coralle" > > Bcc: > > Date: Wed, 4 Mar 2020 11:21:57 -0300 > > Subject: [PVE-User] Error starting VM > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From ooo at mechtilde.de Wed Mar 4 18:54:59 2020 From: ooo at mechtilde.de (Mechtilde) Date: Wed, 4 Mar 2020 18:54:59 +0100 Subject: [PVE-User] Installation a VM at proxmox In-Reply-To: References: <755a5878-ac4b-39db-dc74-af22d51cb677@mechtilde.de> <1583303134.3tqgqi4yew.astroid@nora.none> Message-ID: <2668fbe9-0f54-6a93-a656-a518965ea60c@mechtilde.de> Hello, Am 04.03.20 um 14:04 schrieb Laurent Dumont: > If this is a regular laptop, make sure you have enabled VTX in the BIOS > options. First I only found the "Diagnostic Tool" with F10. After using the search engine again I found the hint to use F1. There I found the option to enable virtualization. Thanks for that hints Kind regards > > On Wed, Mar 4, 2020, 2:46 AM Fabian Gr?nbichler > wrote: > >> On March 3, 2020 9:14 pm, Mechtilde wrote: >>> I can also use the dialog to create a VM and connect it to the *.iso. >>> >>> After that I get the message "Failled to connect to server and the >>> splash screen for noVNC. >> >> did the VM actually start? 
you should have a visual indication of that >> in the left side tree (instead of the greyed out, inactive icon there >> should be an active icon with a small 'play symbol' overlayed), and in >> the VM view itself (after you selected the VM in the left-side tree) the >> summary should say 'running' instead of 'stopped'. >> >> you can also check by running "qm list" or "qm status " on the >> CLI. >> >>> The status message is: "Failed to run vncproxy." >> >> this message usually occurs when the VM is not running. > -- Mechtilde Stehmann ## Apache OpenOffice ## Freie Office Suite f?r Linux, MacOSX, Windows ## Debian Developer ## PGP encryption welcome ## F0E3 7F3D C87A 4998 2899 39E7 F287 7BBA 141A AD7F -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From leandro at tecnetmza.com.ar Thu Mar 5 17:26:55 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 5 Mar 2020 13:26:55 -0300 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). Message-ID: Hi guys , here i am trying to get used to pve. (I came from VM ware EXSi). Im used to use "thin provisioning" option in vmware. It allowed me to set maximum drive space allocation but vm will use minimum and dynamically grow as guest OS needs . How should I accomplish same on pve ? I created a VM and set 200GB at the storage capacity but can not set any flag about thin provisioning. Now , If I list lslbl , it shows that created VM is taking the 200GB. root at pve:~# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 5.5T 0 disk ??sda1 8:1 0 1007K 0 part ??sda2 8:2 0 512M 0 part ??sda3 8:3 0 3.9T 0 part ??pve-swap 253:0 0 8G 0 lvm [SWAP] ??pve-root 253:1 0 1.2T 0 lvm / ??pve-data_tmeta 253:2 0 15.8G 0 lvm ? ??pve-data-tpool 253:4 0 1.9T 0 lvm ? ??pve-data 253:5 0 1.9T 0 lvm ? ??pve-vm--100--disk--0 253:6 0 200G 0 lvm ??pve-data_tdata 253:3 0 1.9T 0 lvm ??pve-data-tpool 253:4 0 1.9T 0 lvm ??pve-data 253:5 0 1.9T 0 lvm ??pve-vm--100--disk--0 253:6 0 200G 0 lvm sr0 11:0 1 1024M 0 rom I can see from gui , that is is possible to expand disk. So ... perhaps I should assign a little storage at beginning and then manually expand if needed. Please share your idea about this. I need to begin creating some VMs and work on them , I dont want to create in a wrong way and have to destroy later. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> From mark at openvs.co.uk Thu Mar 5 17:43:08 2020 From: mark at openvs.co.uk (Mark Adams) Date: Thu, 5 Mar 2020 16:43:08 +0000 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: Thin provisioning is set on the storage, it is a checkbox and of course it has to be a storage type than can be thin provisioned (ie lvmthin, zfs, ceph etc). Then every virtual disk that is created on that storage type is thin provisioned. Regards, Mark On Thu, 5 Mar 2020, 16:28 Leandro Roggerone, wrote: > Hi guys , here i am trying to get used to pve. (I came from VM ware EXSi). > Im used to use "thin provisioning" option in vmware. > It allowed me to set maximum drive space allocation but vm will use > minimum and dynamically grow as guest OS needs > . > How should I accomplish same on pve ? > I created a VM and set 200GB at the storage capacity but can not set any > flag about thin provisioning. > Now , If I list lslbl , it shows that created VM is taking the 200GB. 
> root at pve:~# lsblk > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > sda 8:0 0 5.5T 0 disk > ??sda1 8:1 0 1007K 0 part > ??sda2 8:2 0 512M 0 part > ??sda3 8:3 0 3.9T 0 part > ??pve-swap 253:0 0 8G 0 lvm [SWAP] > ??pve-root 253:1 0 1.2T 0 lvm / > ??pve-data_tmeta 253:2 0 15.8G 0 lvm > ? ??pve-data-tpool 253:4 0 1.9T 0 lvm > ? ??pve-data 253:5 0 1.9T 0 lvm > ? ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > ??pve-data_tdata 253:3 0 1.9T 0 lvm > ??pve-data-tpool 253:4 0 1.9T 0 lvm > ??pve-data 253:5 0 1.9T 0 lvm > ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > sr0 11:0 1 1024M 0 rom > > > I can see from gui , that is is possible to expand disk. > So ... perhaps I should assign a little storage at beginning and then > manually expand if needed. > Please share your idea about this. > I need to begin creating some VMs and work on them , I dont want to create > in a wrong way and have to destroy later. > > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > Libre > de virus. www.avast.com > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From leandro at tecnetmza.com.ar Thu Mar 5 20:44:46 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 5 Mar 2020 16:44:46 -0300 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: Dear Mark, you said: "Then every virtual disk that is created on that storage type is thin provisioned." Ok ... good to hear that. So ... Lets suppose I created 5 VMs about 200Gb each on a 1TB space. Im using less than 50% on each VM and now I need to add a 6th VM ? Even Im using thin storage , I will have no free space. What can I do ? a) Should assign minimun capacity during creating time and enlarge later if needed ? b ) Should I look for unused space among existing VMs and reduce the disk ? Please let me know what is your criteria to assign storage to new VM. Leandro. El jue., 5 mar. 2020 a las 13:44, Mark Adams via pve-user (< pve-user at pve.proxmox.com>) escribi?: > > > > ---------- Forwarded message ---------- > From: Mark Adams > To: PVE User List > Cc: > Bcc: > Date: Thu, 5 Mar 2020 16:43:08 +0000 > Subject: Re: [PVE-User] Creating VM with thin provisioning (please help me > understand it). > Thin provisioning is set on the storage, it is a checkbox and of course it > has to be a storage type than can be thin provisioned (ie lvmthin, zfs, > ceph etc). > > Then every virtual disk that is created on that storage type is thin > provisioned. > > Regards, > Mark > > On Thu, 5 Mar 2020, 16:28 Leandro Roggerone, > wrote: > > > Hi guys , here i am trying to get used to pve. (I came from VM ware > EXSi). > > Im used to use "thin provisioning" option in vmware. > > It allowed me to set maximum drive space allocation but vm will use > > minimum and dynamically grow as guest OS needs > > . > > How should I accomplish same on pve ? > > I created a VM and set 200GB at the storage capacity but can not set any > > flag about thin provisioning. > > Now , If I list lslbl , it shows that created VM is taking the 200GB. 
> > root at pve:~# lsblk > > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > > sda 8:0 0 5.5T 0 disk > > ??sda1 8:1 0 1007K 0 part > > ??sda2 8:2 0 512M 0 part > > ??sda3 8:3 0 3.9T 0 part > > ??pve-swap 253:0 0 8G 0 lvm [SWAP] > > ??pve-root 253:1 0 1.2T 0 lvm / > > ??pve-data_tmeta 253:2 0 15.8G 0 lvm > > ? ??pve-data-tpool 253:4 0 1.9T 0 lvm > > ? ??pve-data 253:5 0 1.9T 0 lvm > > ? ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > > ??pve-data_tdata 253:3 0 1.9T 0 lvm > > ??pve-data-tpool 253:4 0 1.9T 0 lvm > > ??pve-data 253:5 0 1.9T 0 lvm > > ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > > sr0 11:0 1 1024M 0 rom > > > > > > I can see from gui , that is is possible to expand disk. > > So ... perhaps I should assign a little storage at beginning and then > > manually expand if needed. > > Please share your idea about this. > > I need to begin creating some VMs and work on them , I dont want to > create > > in a wrong way and have to destroy later. > > > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > Libre > > de virus. www.avast.com > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > ---------- Forwarded message ---------- > From: Mark Adams via pve-user > To: PVE User List > Cc: Mark Adams > Bcc: > Date: Thu, 5 Mar 2020 16:43:08 +0000 > Subject: Re: [PVE-User] Creating VM with thin provisioning (please help me > understand it). > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gianni.milo22 at gmail.com Thu Mar 5 22:32:51 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Thu, 5 Mar 2020 21:32:51 +0000 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: > So ... Lets suppose I created 5 VMs about 200Gb each on a 1TB space. > Im using less than 50% on each VM and now I need to add a 6th VM ? Even > Im using thin storage , I will have no free space. > What can I do ? When using ThinLVM a VM initially allocates the same amount of data as the guest OS. For example, if you install plain Windows10 in a VM, initially it will occupy only 20GB of disk space, despite the fact that its virtual disk is 200GB. Of course, as the demand for data within guest OS grows, that initial 20GB will eventually become 30,50,100 until it maxes out the 200GB limit. Once it reaches the 200GB limit, that virtual disk effectively becomes a ThickLVM volume. This is one of the reasons why you should make sure to constantly monitor the disk space consumption when using ThinLVM, especially if you are over provisioning (i.e Using 6 VM x 200GB each = 1.2 TB, where the actual physical space is only 1TB). Failing to do so, could lead in irreversible data corruption. Use the normal LVM cli commands to monitor its status, space utilisation etc (lsblk is not the right tool for that). I suggest you reading the Proxmox Admin guide so you familiarise yourself with its concepts. In particular chapter 7, where its storage model is described... 
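Concretely, the "normal LVM cli commands" mostly boil down to watching the Data% and Meta% columns of the pool (a quick sketch, assuming the default volume group name "pve" and pool name "data"):

    # real allocation inside the thin pool
    lvs pve/data
    # every thin volume in the VG, with per-disk usage
    lvs -a pve
    # Proxmox's own summary of all configured storages
    pvesm status

If Data% or Meta% creeps towards 100%, it is time to free space or grow the pool before guests start seeing I/O errors.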
https://pve.proxmox.com/pve-docs/pve-admin-guide.pdf One last thing, normally the guest OS won't release the occupied disk space on the host when you delete data within it. For example, let's say that you have a VM which is using 30GB of data and then you delete 10GB worth of data. That 10GB of space won't be released from the VM ThinLV volume, unless you use virtio-scsi as storage controller for the VM and the discard option is enabled. More on LVM: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/index G. From mark at openvs.co.uk Thu Mar 5 23:19:39 2020 From: mark at openvs.co.uk (Mark Adams) Date: Thu, 5 Mar 2020 22:19:39 +0000 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: Gianni provided a great reply (please do go and read the admin documentation), but let me just follow up to one of your points below. On Thu, 5 Mar 2020, 19:46 Leandro Roggerone, wrote: > Dear Mark, you said: > "Then every virtual disk that is created on that storage type is thin > provisioned." > Ok ... good to hear that. > > So ... Lets suppose I created 5 VMs about 200Gb each on a 1TB space. > Im using less than 50% on each VM and now I need to add a 6th VM ? Even > Im using thin storage , I will have no free space. > If you have configured everything correctly, then in your 50% scenario you will have used 500G of your 1T. You should have space for more VMs but like Gianni said make sure you keep a close eye on utilisation, as running out of space will be very bad for you. Also note that you can't retrospectively enable thin provisioning (as a general rule... there are some manual exceptions). This means that if you created these 5 vms before you ticked that thin provisioning box, they are thick provisioned and not thin. Which from what you say about being out of space with 5 x 200G disks sounds like what you did. Regards, Mark What can I do ? > a) Should assign minimun capacity during creating time and enlarge later if > needed ? > b ) Should I look for unused space among existing VMs and reduce the disk > ? > Please let me know what is your criteria to assign storage to new VM. > Leandro. > > > > > El jue., 5 mar. 2020 a las 13:44, Mark Adams via pve-user (< > pve-user at pve.proxmox.com>) escribi?: > > > > > > > > > ---------- Forwarded message ---------- > > From: Mark Adams > > To: PVE User List > > Cc: > > Bcc: > > Date: Thu, 5 Mar 2020 16:43:08 +0000 > > Subject: Re: [PVE-User] Creating VM with thin provisioning (please help > me > > understand it). > > Thin provisioning is set on the storage, it is a checkbox and of course > it > > has to be a storage type than can be thin provisioned (ie lvmthin, zfs, > > ceph etc). > > > > Then every virtual disk that is created on that storage type is thin > > provisioned. > > > > Regards, > > Mark > > > > On Thu, 5 Mar 2020, 16:28 Leandro Roggerone, > > wrote: > > > > > Hi guys , here i am trying to get used to pve. (I came from VM ware > > EXSi). > > > Im used to use "thin provisioning" option in vmware. > > > It allowed me to set maximum drive space allocation but vm will use > > > minimum and dynamically grow as guest OS needs > > > . > > > How should I accomplish same on pve ? > > > I created a VM and set 200GB at the storage capacity but can not set > any > > > flag about thin provisioning. > > > Now , If I list lslbl , it shows that created VM is taking the 200GB. 
> > > root at pve:~# lsblk > > > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > > > sda 8:0 0 5.5T 0 disk > > > ??sda1 8:1 0 1007K 0 part > > > ??sda2 8:2 0 512M 0 part > > > ??sda3 8:3 0 3.9T 0 part > > > ??pve-swap 253:0 0 8G 0 lvm [SWAP] > > > ??pve-root 253:1 0 1.2T 0 lvm / > > > ??pve-data_tmeta 253:2 0 15.8G 0 lvm > > > ? ??pve-data-tpool 253:4 0 1.9T 0 lvm > > > ? ??pve-data 253:5 0 1.9T 0 lvm > > > ? ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > > > ??pve-data_tdata 253:3 0 1.9T 0 lvm > > > ??pve-data-tpool 253:4 0 1.9T 0 lvm > > > ??pve-data 253:5 0 1.9T 0 lvm > > > ??pve-vm--100--disk--0 253:6 0 200G 0 lvm > > > sr0 11:0 1 1024M 0 rom > > > > > > > > > I can see from gui , that is is possible to expand disk. > > > So ... perhaps I should assign a little storage at beginning and then > > > manually expand if needed. > > > Please share your idea about this. > > > I need to begin creating some VMs and work on them , I dont want to > > create > > > in a wrong way and have to destroy later. > > > > > > < > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > Libre > > > de virus. www.avast.com > > > < > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > > > > ---------- Forwarded message ---------- > > From: Mark Adams via pve-user > > To: PVE User List > > Cc: Mark Adams > > Bcc: > > Date: Thu, 5 Mar 2020 16:43:08 +0000 > > Subject: Re: [PVE-User] Creating VM with thin provisioning (please help > me > > understand it). > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From atilav at lightspeed.ca Thu Mar 5 23:22:33 2020 From: atilav at lightspeed.ca (Atila Vasconcelos) Date: Thu, 5 Mar 2020 14:22:33 -0800 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: <058d5c6d-32a7-5083-d723-7602b12f3ca1@lightspeed.ca> WOW!!!!! I was searching for this piece of information for a long long time!!! Thank you Gianni, you changed my life! For real! Thanks a lot! ABV On 2020-03-05 1:32 p.m., Gianni Milo wrote: > > One last thing, normally the guest OS won't release the occupied disk space > on the host when you delete data within it. For example, let's say that you > have a VM which is using 30GB of data and then you delete 10GB worth of > data. That 10GB of space won't be released from the VM ThinLV volume, > unless you use virtio-scsi as storage controller for the VM and the discard > option is enabled. > > More on LVM: > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/index > > > G. 
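For anyone wanting to try the discard option, a minimal sketch of what it looks like from the CLI (VM ID 100 and the disk name are placeholders, adjust to your setup; the drive change typically takes effect after the VM is restarted):

    # on the host: use the virtio-scsi controller and enable discard on the disk
    qm set 100 --scsihw virtio-scsi-pci
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
    # inside a Linux guest, after deleting data
    fstrim -av

With discard enabled, the freed blocks are passed down to the thin pool, so lvs shows the space being released again.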
> _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From mark at openvs.co.uk Thu Mar 5 23:27:46 2020 From: mark at openvs.co.uk (Mark Adams) Date: Thu, 5 Mar 2020 22:27:46 +0000 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: <058d5c6d-32a7-5083-d723-7602b12f3ca1@lightspeed.ca> References: <058d5c6d-32a7-5083-d723-7602b12f3ca1@lightspeed.ca> Message-ID: Atila - just to follow up on Giannis discard notes, depending on what OS and filesystems you use inside of your VMs, you may need to run fstrim, mount with different options, or run specific commands (ie zpool trim for zfs) to get it all working correctly. Regards, Mark On Thu, 5 Mar 2020, 22:22 Atila Vasconcelos, wrote: > WOW!!!!! > > I was searching for this piece of information for a long long time!!! > > Thank you Gianni, you changed my life! For real! Thanks a lot! > > > ABV > > On 2020-03-05 1:32 p.m., Gianni Milo wrote: > > > > One last thing, normally the guest OS won't release the occupied disk > space > > on the host when you delete data within it. For example, let's say that > you > > have a VM which is using 30GB of data and then you delete 10GB worth of > > data. That 10GB of space won't be released from the VM ThinLV volume, > > unless you use virtio-scsi as storage controller for the VM and the > discard > > option is enabled. > > > > More on LVM: > > > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/index > > > > > > G. > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From leandro at tecnetmza.com.ar Fri Mar 6 14:19:44 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Fri, 6 Mar 2020 10:19:44 -0300 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: Gianni , thans for your feedbak , please take a look at my comments bellow: When using ThinLVM a VM initially allocates the same amount of data as the guest OS. For example, if you install plain Windows10 in a VM, initially it will occupy only 20GB of disk space, despite the fact that its virtual disk is 200GB. Of course, as the demand for data within guest OS grows, that initial 20GB will eventually become 30,50,100 until it maxes out the 200GB limit. Once it reaches the 200GB limit, that virtual disk effectively becomes a ThickLVM volume. Dont understand what happens here ... Sorry to push about this subject , but in my case I need to mount a netflow server. Storage needs are very variable here. Some times exported data grows a lot and can reach 300Gb in a few weeks. Some times exported data is under 20 or 30 Gb for some customers for ever. How should I effectively assign storage to my VMs? What would you do in this cases ? This is one of the reasons why you should make sure to constantly monitor the disk space consumption when using ThinLVM, especially if you are over provisioning (i.e Using 6 VM x 200GB each = 1.2 TB, where the actual physical space is only 1TB). Failing to do so, could lead in irreversible data corruption. 
Use the normal LVM cli commands to monitor its status, space utilisation etc (lsblk is not the right tool for that). Can you provide a better tool ? Of course im reading admin guide now! Regards, Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> El jue., 5 mar. 2020 a las 18:34, Gianni Milo () escribi?: > > > So ... Lets suppose I created 5 VMs about 200Gb each on a 1TB space. > > Im using less than 50% on each VM and now I need to add a 6th VM ? Even > > Im using thin storage , I will have no free space. > > What can I do ? > > > When using ThinLVM a VM initially allocates the same amount of data as the > guest OS. For example, if you install plain Windows10 in a VM, initially it > will occupy only 20GB of disk space, despite the fact that its virtual disk > is 200GB. Of course, as the demand for data within guest OS grows, that > initial 20GB will eventually become 30,50,100 until it maxes out the 200GB > limit. Once it reaches the 200GB limit, that virtual disk effectively > becomes a ThickLVM volume. This is one of the reasons why you should make > sure to constantly monitor the disk space consumption when using ThinLVM, > especially if you are over provisioning (i.e Using 6 VM x 200GB each = 1.2 > TB, where the actual physical space is only 1TB). Failing to do so, could > lead in irreversible data corruption. Use the normal LVM cli commands to > monitor its status, space utilisation etc (lsblk is not the right tool for > that). > > I suggest you reading the Proxmox Admin guide so you familiarise yourself > with its concepts. In particular chapter 7, where its storage model is > described... > > https://pve.proxmox.com/pve-docs/pve-admin-guide.pdf > > One last thing, normally the guest OS won't release the occupied disk space > on the host when you delete data within it. For example, let's say that you > have a VM which is using 30GB of data and then you delete 10GB worth of > data. That 10GB of space won't be released from the VM ThinLV volume, > unless you use virtio-scsi as storage controller for the VM and the discard > option is enabled. > > More on LVM: > > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/index > > > G. > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From mark at openvs.co.uk Fri Mar 6 14:37:02 2020 From: mark at openvs.co.uk (Mark Adams) Date: Fri, 6 Mar 2020 13:37:02 +0000 Subject: [PVE-User] Creating VM with thin provisioning (please help me understand it). In-Reply-To: References: Message-ID: REF: "Thin provisioning is set on the storage, it is a checkbox and of course it has to be a storage type than can be thin provisioned (ie lvmthin, zfs, ceph etc)." I have to correct myself on this sorry, it's been a long time since I used lvmthin.... This checkbox option is only available on zfs storage as I guess you have the option for both, where as lvmthin should always be thin? On Thu, 5 Mar 2020 at 16:43, Mark Adams wrote: > Thin provisioning is set on the storage, it is a checkbox and of course it > has to be a storage type than can be thin provisioned (ie lvmthin, zfs, > ceph etc). > > Then every virtual disk that is created on that storage type is thin > provisioned. > > Regards, > Mark > > On Thu, 5 Mar 2020, 16:28 Leandro Roggerone, > wrote: > >> Hi guys , here i am trying to get used to pve. (I came from VM ware EXSi). 
>> Im used to use "thin provisioning" option in vmware. >> It allowed me to set maximum drive space allocation but vm will use >> minimum and dynamically grow as guest OS needs >> . >> How should I accomplish same on pve ? >> I created a VM and set 200GB at the storage capacity but can not set any >> flag about thin provisioning. >> Now , If I list lslbl , it shows that created VM is taking the 200GB. >> root at pve:~# lsblk >> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT >> sda 8:0 0 5.5T 0 disk >> ??sda1 8:1 0 1007K 0 part >> ??sda2 8:2 0 512M 0 part >> ??sda3 8:3 0 3.9T 0 part >> ??pve-swap 253:0 0 8G 0 lvm [SWAP] >> ??pve-root 253:1 0 1.2T 0 lvm / >> ??pve-data_tmeta 253:2 0 15.8G 0 lvm >> ? ??pve-data-tpool 253:4 0 1.9T 0 lvm >> ? ??pve-data 253:5 0 1.9T 0 lvm >> ? ??pve-vm--100--disk--0 253:6 0 200G 0 lvm >> ??pve-data_tdata 253:3 0 1.9T 0 lvm >> ??pve-data-tpool 253:4 0 1.9T 0 lvm >> ??pve-data 253:5 0 1.9T 0 lvm >> ??pve-vm--100--disk--0 253:6 0 200G 0 lvm >> sr0 11:0 1 1024M 0 rom >> >> >> I can see from gui , that is is possible to expand disk. >> So ... perhaps I should assign a little storage at beginning and then >> manually expand if needed. >> Please share your idea about this. >> I need to begin creating some VMs and work on them , I dont want to create >> in a wrong way and have to destroy later. >> >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >> > >> Libre >> de virus. www.avast.com >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >> > >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From proxmox at elchaka.de Sat Mar 7 02:06:19 2020 From: proxmox at elchaka.de (proxmox at elchaka.de) Date: Sat, 07 Mar 2020 02:06:19 +0100 Subject: [PVE-User] How may Container are you running? Message-ID: <162F336C-9F86-42A0-B95F-0E157C4A4051@elchaka.de> Hello people, Short question: How many lxc container are you running (?) On how many pve5/6 nodes (?) With ceph as storage? We Plan to you use 7000 container with 3-6 pve6 nodes on ceph Luminous (30 ssd osds) and want to if this should work without any issues... Thank you for any Inputs Mehmet From gaio at sv.lnf.it Mon Mar 9 11:34:32 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 9 Mar 2020 11:34:32 +0100 Subject: [PVE-User] Unprivileged container and fstrim... Message-ID: <20200309103432.GE3025@sv.lnf.it> I was abused to do a 'fstrim /' weekly run on my privileged containers, and seems that the command was effective: root at vdcsv1:~# fstrim -v / /: 18,6 GiB (19946131456 bytes) trimmed but after convert many of them to unprivileged ones: root at vmw:~# fstrim -v / fstrim: /: FITRIM ioctl failed: Operazione non permessa Clearly, underlying storage is 'trimmable'. Two question. 1) trim on container generally works, or simply on privileged ones does not throw errors but do nothing? ;-) Container space change slowly, so could be that trim was simply ignored... 2) there's another way to 'trim' container disk space? Eg, from the host? Thanks. -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From pve at junkyard.4t2.com Mon Mar 9 11:44:44 2020 From: pve at junkyard.4t2.com (Tom Weber) Date: Mon, 09 Mar 2020 11:44:44 +0100 Subject: [PVE-User] Unprivileged container and fstrim... In-Reply-To: <20200309103432.GE3025@sv.lnf.it> References: <20200309103432.GE3025@sv.lnf.it> Message-ID: Am Montag, den 09.03.2020, 11:34 +0100 schrieb Marco Gaiarin: > I was abused to do a 'fstrim /' weekly run on my privileged > containers, > and seems that the command was effective: > > root at vdcsv1:~# fstrim -v / > /: 18,6 GiB (19946131456 bytes) trimmed > > but after convert many of them to unprivileged ones: > > root at vmw:~# fstrim -v / > fstrim: /: FITRIM ioctl failed: Operazione non permessa > > Clearly, underlying storage is 'trimmable'. > > > Two question. > > 1) trim on container generally works, or simply on privileged ones > does > not throw errors but do nothing? ;-) > Container space change slowly, so could be that trim was simply > ignored... > > 2) there's another way to 'trim' container disk space? Eg, from the > host? pct fstrim on the host? Tom From gaio at sv.lnf.it Mon Mar 9 12:03:02 2020 From: gaio at sv.lnf.it (Marco Gaiarin) Date: Mon, 9 Mar 2020 12:03:02 +0100 Subject: [PVE-User] Unprivileged container and fstrim... In-Reply-To: References: <20200309103432.GE3025@sv.lnf.it> Message-ID: <20200309110302.GF3025@sv.lnf.it> Mandi! Tom Weber In chel di` si favelave... > pct fstrim > on the host? AAARRGHHH! Totaly missed that. Sorry to the list, and thanks. ;-) -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ Polo FVG - Via della Bont?, 7 - 33078 - San Vito al Tagliamento (PN) marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) From leandro at tecnetmza.com.ar Mon Mar 9 12:53:29 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 9 Mar 2020 08:53:29 -0300 Subject: [PVE-User] Creating VM with thin provisioning (ESXi fashion) Message-ID: Sorry to compare ... between systems , I just want to get to the same function: When you create a VM on vmware you assign a max storage , for example 400Gb. Then you can see from vm dashboard that this VM is using 194Gb. But ,if I look inside the VM , you will see: [root at ftp ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 * 400G 0 disk* ??sda1 8:1 0 1G 0 part /boot ??sda2 8:2 0 398.9G 0 part ??centos-root 253:0 0 395G 0 lvm / ??centos-swap 253:1 0 3.9G 0 lvm [SWAP] sr0 11:0 1 918M 0 rom The VM "believes" it has a 400Gb drive, and can continue automatically growing up to those 400 GB but it is currently using only 194Gb. I would like to work same way on my PVE environment if possible .. or what is best approach. Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> From gregor at aeppelbroe.de Mon Mar 9 13:07:20 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Mon, 09 Mar 2020 13:07:20 +0100 Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen Message-ID: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> Moin, ich teste verschiedenes mit CEPH. Dabei habe ich eine SSD ?ber Destroy entfernt. Wie kann ich die SSD wieder in den Cluster aufnehmen? Beim Hinzuf?gen wird mir keine verf?gbare SSD angezeigt? 
Unter Disks wurde die SSD auch noch als osd.4 bezeichnet. Darufhin habe ich auf der Konsole per dd den MBR ?berschrieben. Jetzt taucht sie als Usage: Device Mapper auf und ich kann sie trotzdem nicht neu initialisieren. Es ist auch noch ein systemd service vorhanden: ceph-osd at 4.service Irgendetwas ist da durcheinander gekommen. Gr??e Greogr From leandro at tecnetmza.com.ar Mon Mar 9 13:11:49 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 9 Mar 2020 09:11:49 -0300 Subject: [PVE-User] qcow2 vs raw format Message-ID: Hi guys, after install pve , would like to create my first VM. I noticed that only available format is raw. Question is: Is qcow2 deprecated ? What are differences between them ? (I already google it but is not 100% clear). This is my storage configuration is. root at pve:~# cat /etc/pve/storage.cfg dir: local path /var/lib/vz content iso,vztmpl,backup lvmthin: local-lvm thinpool data vgname pve content rootdir,images Regards. Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> From elacunza at binovo.es Mon Mar 9 13:15:54 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 9 Mar 2020 13:15:54 +0100 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: Message-ID: Hola Leandro, El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > Hi guys, after install pve , would like to create my first VM. > I noticed that only available format is raw. > Question is: > Is qcow2 deprecated ? > What are differences between them ? (I already google it but is not 100% > clear). > > This is my storage configuration is. > root at pve:~# cat /etc/pve/storage.cfg > dir: local > path /var/lib/vz > content iso,vztmpl,backup > > lvmthin: local-lvm > thinpool data > vgname pve > content rootdir,images > You need a file-type storage (i.e. dir type) to be able to use qcow2 format. Default local-lvm is lvmthin and only logical volumes in raw format can be created (no raw files are created in fact). Cheers Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From leandro at tecnetmza.com.ar Mon Mar 9 13:21:31 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 9 Mar 2020 09:21:31 -0300 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: Message-ID: Thankyou Eneko , Are features / performance difference between those formats ? Regards, Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () escribi?: > Hola Leandro, > > El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > > Hi guys, after install pve , would like to create my first VM. > > I noticed that only available format is raw. > > Question is: > > Is qcow2 deprecated ? > > What are differences between them ? (I already google it but is not 100% > > clear). > > > > This is my storage configuration is. > > root at pve:~# cat /etc/pve/storage.cfg > > dir: local > > path /var/lib/vz > > content iso,vztmpl,backup > > > > lvmthin: local-lvm > > thinpool data > > vgname pve > > content rootdir,images > > > You need a file-type storage (i.e. dir type) to be able to use qcow2 > format. Default local-lvm is lvmthin and only logical volumes in raw > format can be created (no raw files are created in fact). > > Cheers > Eneko > > -- > Zuzendari Teknikoa / Director T?cnico > Binovo IT Human Project, S.L. > Telf. 943569206 > Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) > www.binovo.es > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gilberto.nunes32 at gmail.com Mon Mar 9 13:25:49 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Mon, 9 Mar 2020 09:25:49 -0300 Subject: [PVE-User] VZdump: No such disk, but the disk is there! In-Reply-To: References: <0eaec1f1-f6e9-1c58-2595-dced38bf9932@binovo.es> <754b8bc0-419b-d9e5-0c11-400b25e1d916@lightspeed.ca> Message-ID: Solved after upgrade to PVE6. --- Gilberto Nunes Ferreira Em qui., 20 de fev. de 2020 ?s 12:26, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Any advice? > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > > > Em qua., 19 de fev. de 2020 ?s 11:01, Gilberto Nunes < > gilberto.nunes32 at gmail.com> escreveu: > >> HI there >> >> I change the bwlimit to 100000 inside /etc/vzdump and vzdump works >> normally for a couple of days and it's make happy. >> Now, I have the error again! No logs, no explanation! Just error pure and >> simple: >> >> 110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu) >> 110: 2020-02-18 22:18:06 INFO: status = running >> 110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup >> 110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163 >> 110: 2020-02-18 22:18:07 INFO: include disk 'scsi0' 'local-lvm:vm-110-disk-0' 100G >> 110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0' >> >> 112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu) >> 112: 2020-02-18 22:19:00 INFO: status = running >> 112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup >> 112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165 >> 112: 2020-02-18 22:19:01 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G >> 112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0' >> >> 116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu) >> 116: 2020-02-18 22:19:31 INFO: status = running >> 116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup >> 116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162 >> 116: 2020-02-18 22:19:32 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G >> 116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0' >> >> >> >> --- >> Gilberto Nunes Ferreira >> >> (47) 3025-5907 >> (47) 99676-7530 - Whatsapp / Telegram >> >> Skype: gilberto.nunes36 >> >> >> >> >> >> Em sex., 14 de fev. de 2020 ?s 14:22, Gianni Milo < >> gianni.milo22 at gmail.com> escreveu: >> >>> If it's happening randomly, my best guess would be that it might be >>> related >>> to high i/o during the time frame that the backup takes place. >>> Have you tried creating multiple backup schedules which will take place >>> at >>> different times ? Setting backup bandwidth limits might also help. >>> Check the PVE administration guide for more details on this. You could >>> check for any clues in syslog during the time that the failed backup >>> takes >>> place as well. >>> >>> G. >>> >>> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes >> > >>> wrote: >>> >>> > HI guys >>> > >>> > Some problem but with two different vms... >>> > I also update Proxmox still in 5.x series, but no changes... Now this >>> > problem ocurrs twice, one night after other... >>> > I am very concerned about it! 
>>> > Please, Proxmox staff, is there something I can do to solve this issue? >>> > Anybody alread do a bugzilla??? >>> > >>> > Thanks >>> > --- >>> > Gilberto Nunes Ferreira >>> > >>> > (47) 3025-5907 >>> > (47) 99676-7530 - Whatsapp / Telegram >>> > >>> > Skype: gilberto.nunes36 >>> > >>> > >>> > >>> > >>> > >>> > Em qui., 13 de fev. de 2020 ?s 19:53, Atila Vasconcelos < >>> > atilav at lightspeed.ca> escreveu: >>> > >>> > > Hi, >>> > > >>> > > I had the same problem in the past and it repeats once a while.... >>> its >>> > > very random; I could not find any way to reproduce it. >>> > > >>> > > But as it happens... it will go away. >>> > > >>> > > When you are almost forgetting about it, it will come again ;) >>> > > >>> > > I just learned to ignore it (and do manually the backup when it >>> fails) >>> > > >>> > > I see in proxmox 6.x it is less frequent (but still happening once a >>> > > while). >>> > > >>> > > >>> > > ABV >>> > > >>> > > >>> > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote: >>> > > > Yeah! Me too... This problem is pretty random... Let see next week! >>> > > > --- >>> > > > Gilberto Nunes Ferreira >>> > > > >>> > > > (47) 3025-5907 >>> > > > (47) 99676-7530 - Whatsapp / Telegram >>> > > > >>> > > > Skype: gilberto.nunes36 >>> > > > >>> > > > >>> > > > >>> > > > >>> > > > >>> > > > Em qui., 13 de fev. de 2020 ?s 09:29, Eneko Lacunza < >>> > elacunza at binovo.es> >>> > > > escreveu: >>> > > > >>> > > >> Maybe check dm-15 permissions, ls -l /dev/dm-15, but really out of >>> > ideas >>> > > >> now, sorry!!! ;) >>> > > >> >>> > > >> El 13/2/20 a las 13:24, Gilberto Nunes escribi?: >>> > > >>> I can assure you... the disk is there! >>> > > >>> >>> > > >>> pvesm list local-lvm >>> > > >>> local-lvm:vm-101-disk-0 raw 53687091200 101 >>> > > >>> local-lvm:vm-102-disk-0 raw 536870912000 102 >>> > > >>> local-lvm:vm-103-disk-0 raw 322122547200 103 >>> > > >>> local-lvm:vm-104-disk-0 raw 214748364800 104 >>> > > >>> local-lvm:vm-104-state-LUKPLAS raw 17704157184 104 >>> > > >>> local-lvm:vm-105-disk-0 raw 751619276800 105 >>> > > >>> local-lvm:vm-106-disk-0 raw 161061273600 106 >>> > > >>> local-lvm:vm-107-disk-0 raw 536870912000 107 >>> > > >>> local-lvm:vm-108-disk-0 raw 214748364800 108 >>> > > >>> local-lvm:vm-109-disk-0 raw 107374182400 109 >>> > > >>> local-lvm:vm-110-disk-0 raw 107374182400 110 >>> > > >>> local-lvm:vm-111-disk-0 raw 107374182400 111 >>> > > >>> local-lvm:vm-112-disk-0 raw 128849018880 112 >>> > > >>> local-lvm:vm-113-disk-0 raw 53687091200 113 >>> > > >>> local-lvm:vm-113-state-antes_balloon raw 17704157184 113 >>> > > >>> local-lvm:vm-114-disk-0 raw 128849018880 114 >>> > > >>> local-lvm:vm-115-disk-0 raw 107374182400 115 >>> > > >>> local-lvm:vm-115-disk-1 raw 53687091200 115 >>> > > >>> local-lvm:vm-116-disk-0 raw 107374182400 116 >>> > > >>> local-lvm:vm-117-disk-0 raw 107374182400 117 >>> > > >>> local-lvm:vm-118-disk-0 raw 107374182400 118 >>> > > >>> local-lvm:vm-119-disk-0 raw 26843545600 119 >>> > > >>> local-lvm:vm-121-disk-0 raw 107374182400 121 >>> > > >>> local-lvm:vm-122-disk-0 raw 107374182400 122 >>> > > >>> local-lvm:vm-123-disk-0 raw 161061273600 123 >>> > > >>> local-lvm:vm-124-disk-0 raw 107374182400 124 >>> > > >>> local-lvm:vm-125-disk-0 raw 53687091200 125 >>> > > >>> local-lvm:vm-126-disk-0 raw 32212254720 126 >>> > > >>> local-lvm:vm-127-disk-0 raw 53687091200 127 >>> > > >>> local-lvm:vm-129-disk-0 raw 21474836480 129 >>> > > >>> >>> > > >>> ls -l /dev/pve/vm-110-disk-0 >>> > > >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 
/dev/pve/vm-110-disk-0 -> >>> > > ../dm-15 >>> > > >>> >>> > > >>> >>> > > >>> --- >>> > > >>> Gilberto Nunes Ferreira >>> > > >>> >>> > > >>> (47) 3025-5907 >>> > > >>> (47) 99676-7530 - Whatsapp / Telegram >>> > > >>> >>> > > >>> Skype: gilberto.nunes36 >>> > > >>> >>> > > >>> >>> > > >>> >>> > > >>> >>> > > >>> >>> > > >>> Em qui., 13 de fev. de 2020 ?s 09:19, Eneko Lacunza < >>> > > elacunza at binovo.es> >>> > > >>> escreveu: >>> > > >>> >>> > > >>>> What about: >>> > > >>>> >>> > > >>>> pvesm list local-lvm >>> > > >>>> ls -l /dev/pve/vm-110-disk-0 >>> > > >>>> >>> > > >>>> El 13/2/20 a las 12:40, Gilberto Nunes escribi?: >>> > > >>>>> Quite strange to say the least >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> ls /dev/pve/* >>> > > >>>>> /dev/pve/root /dev/pve/vm-109-disk-0 >>> > > >>>>> /dev/pve/vm-118-disk-0 >>> > > >>>>> /dev/pve/swap /dev/pve/vm-110-disk-0 >>> > > >>>>> /dev/pve/vm-119-disk-0 >>> > > >>>>> /dev/pve/vm-101-disk-0 /dev/pve/vm-111-disk-0 >>> > > >>>>> /dev/pve/vm-121-disk-0 >>> > > >>>>> /dev/pve/vm-102-disk-0 /dev/pve/vm-112-disk-0 >>> > > >>>>> /dev/pve/vm-122-disk-0 >>> > > >>>>> /dev/pve/vm-103-disk-0 /dev/pve/vm-113-disk-0 >>> > > >>>>> /dev/pve/vm-123-disk-0 >>> > > >>>>> /dev/pve/vm-104-disk-0 >>> /dev/pve/vm-113-state-antes_balloon >>> > > >>>>> /dev/pve/vm-124-disk-0 >>> > > >>>>> /dev/pve/vm-104-state-LUKPLAS /dev/pve/vm-114-disk-0 >>> > > >>>>> /dev/pve/vm-125-disk-0 >>> > > >>>>> /dev/pve/vm-105-disk-0 /dev/pve/vm-115-disk-0 >>> > > >>>>> /dev/pve/vm-126-disk-0 >>> > > >>>>> /dev/pve/vm-106-disk-0 /dev/pve/vm-115-disk-1 >>> > > >>>>> /dev/pve/vm-127-disk-0 >>> > > >>>>> /dev/pve/vm-107-disk-0 /dev/pve/vm-116-disk-0 >>> > > >>>>> /dev/pve/vm-129-disk-0 >>> > > >>>>> /dev/pve/vm-108-disk-0 /dev/pve/vm-117-disk-0 >>> > > >>>>> >>> > > >>>>> ls /dev/mapper/ >>> > > >>>>> control pve-vm--104--state--LUKPLAS >>> > > >>>>> pve-vm--115--disk--1 >>> > > >>>>> iscsi-backup pve-vm--105--disk--0 >>> > > >>>>> pve-vm--116--disk--0 >>> > > >>>>> mpatha pve-vm--106--disk--0 >>> > > >>>>> pve-vm--117--disk--0 >>> > > >>>>> pve-data pve-vm--107--disk--0 >>> > > >>>>> pve-vm--118--disk--0 >>> > > >>>>> pve-data_tdata pve-vm--108--disk--0 >>> > > >>>>> pve-vm--119--disk--0 >>> > > >>>>> pve-data_tmeta pve-vm--109--disk--0 >>> > > >>>>> pve-vm--121--disk--0 >>> > > >>>>> pve-data-tpool pve-vm--110--disk--0 >>> > > >>>>> pve-vm--122--disk--0 >>> > > >>>>> pve-root pve-vm--111--disk--0 >>> > > >>>>> pve-vm--123--disk--0 >>> > > >>>>> pve-swap pve-vm--112--disk--0 >>> > > >>>>> pve-vm--124--disk--0 >>> > > >>>>> pve-vm--101--disk--0 pve-vm--113--disk--0 >>> > > >>>>> pve-vm--125--disk--0 >>> > > >>>>> pve-vm--102--disk--0 pve-vm--113--state--antes_balloon >>> > > >>>>> pve-vm--126--disk--0 >>> > > >>>>> pve-vm--103--disk--0 pve-vm--114--disk--0 >>> > > >>>>> pve-vm--127--disk--0 >>> > > >>>>> pve-vm--104--disk--0 pve-vm--115--disk--0 >>> > > >>>>> pve-vm--129--disk--0 >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> --- >>> > > >>>>> Gilberto Nunes Ferreira >>> > > >>>>> >>> > > >>>>> (47) 3025-5907 >>> > > >>>>> (47) 99676-7530 - Whatsapp / Telegram >>> > > >>>>> >>> > > >>>>> Skype: gilberto.nunes36 >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> >>> > > >>>>> Em qui., 13 de fev. de 2020 ?s 08:38, Eneko Lacunza < >>> > > >> elacunza at binovo.es> >>> > > >>>>> escreveu: >>> > > >>>>> >>> > > >>>>>> It's quite strange, what about "ls /dev/pve/*"? 
>>> > > >>>>>> >>> > > >>>>>> El 13/2/20 a las 12:18, Gilberto Nunes escribi?: >>> > > >>>>>>> n: Thu Feb 13 07:06:19 2020 >>> > > >>>>>>> a2web:~# lvs >>> > > >>>>>>> LV VG Attr LSize >>> > > Pool >>> > > >>>> Origin >>> > > >>>>>>> Data% Meta% Move Log Cpy%Sync Convert >>> > > >>>>>>> backup iscsi -wi-ao---- >>> 1.61t >>> > > >>>>>>> >>> > > >>>>>>> data pve twi-aotz-- >>> 3.34t >>> > > >>>>>>> 88.21 9.53 >>> > > >>>>>>> root pve -wi-ao---- >>> 96.00g >>> > > >>>>>>> >>> > > >>>>>>> snap_vm-104-disk-0_LUKPLAS pve Vri---tz-k >>> 200.00g >>> > > data >>> > > >>>>>>> vm-104-disk-0 >>> > > >>>>>>> snap_vm-113-disk-0_antes_balloon pve Vri---tz-k >>> 50.00g >>> > > data >>> > > >>>>>>> vm-113-disk-0 >>> > > >>>>>>> swap pve -wi-ao---- >>> 8.00g >>> > > >>>>>>> >>> > > >>>>>>> vm-101-disk-0 pve Vwi-aotz-- >>> 50.00g >>> > > data >>> > > >>>>>>> 24.17 >>> > > >>>>>>> vm-102-disk-0 pve Vwi-aotz-- >>> 500.00g >>> > > data >>> > > >>>>>>> 65.65 >>> > > >>>>>>> vm-103-disk-0 pve Vwi-aotz-- >>> 300.00g >>> > > data >>> > > >>>>>>> 37.28 >>> > > >>>>>>> vm-104-disk-0 pve Vwi-aotz-- >>> 200.00g >>> > > data >>> > > >>>>>>> 17.87 >>> > > >>>>>>> vm-104-state-LUKPLAS pve Vwi-a-tz-- >>> 16.49g >>> > > data >>> > > >>>>>>> 35.53 >>> > > >>>>>>> vm-105-disk-0 pve Vwi-aotz-- >>> 700.00g >>> > > data >>> > > >>>>>>> 90.18 >>> > > >>>>>>> vm-106-disk-0 pve Vwi-aotz-- >>> 150.00g >>> > > data >>> > > >>>>>>> 93.55 >>> > > >>>>>>> vm-107-disk-0 pve Vwi-aotz-- >>> 500.00g >>> > > data >>> > > >>>>>>> 98.20 >>> > > >>>>>>> vm-108-disk-0 pve Vwi-aotz-- >>> 200.00g >>> > > data >>> > > >>>>>>> 98.02 >>> > > >>>>>>> vm-109-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 93.68 >>> > > >>>>>>> vm-110-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 34.55 >>> > > >>>>>>> vm-111-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 79.03 >>> > > >>>>>>> vm-112-disk-0 pve Vwi-aotz-- >>> 120.00g >>> > > data >>> > > >>>>>>> 93.78 >>> > > >>>>>>> vm-113-disk-0 pve Vwi-aotz-- >>> 50.00g >>> > > data >>> > > >>>>>>> 65.42 >>> > > >>>>>>> vm-113-state-antes_balloon pve Vwi-a-tz-- >>> 16.49g >>> > > data >>> > > >>>>>>> 43.64 >>> > > >>>>>>> vm-114-disk-0 pve Vwi-aotz-- >>> 120.00g >>> > > data >>> > > >>>>>>> 100.00 >>> > > >>>>>>> vm-115-disk-0 pve Vwi-a-tz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 70.28 >>> > > >>>>>>> vm-115-disk-1 pve Vwi-a-tz-- >>> 50.00g >>> > > data >>> > > >>>>>>> 0.00 >>> > > >>>>>>> vm-116-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 26.34 >>> > > >>>>>>> vm-117-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 100.00 >>> > > >>>>>>> vm-118-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 100.00 >>> > > >>>>>>> vm-119-disk-0 pve Vwi-aotz-- >>> 25.00g >>> > > data >>> > > >>>>>>> 18.42 >>> > > >>>>>>> vm-121-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 23.76 >>> > > >>>>>>> vm-122-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 100.00 >>> > > >>>>>>> vm-123-disk-0 pve Vwi-aotz-- >>> 150.00g >>> > > data >>> > > >>>>>>> 37.89 >>> > > >>>>>>> vm-124-disk-0 pve Vwi-aotz-- >>> 100.00g >>> > > data >>> > > >>>>>>> 30.73 >>> > > >>>>>>> vm-125-disk-0 pve Vwi-aotz-- >>> 50.00g >>> > > data >>> > > >>>>>>> 9.02 >>> > > >>>>>>> vm-126-disk-0 pve Vwi-aotz-- >>> 30.00g >>> > > data >>> > > >>>>>>> 99.72 >>> > > >>>>>>> vm-127-disk-0 pve Vwi-aotz-- >>> 50.00g >>> > > data >>> > > >>>>>>> 10.79 >>> > > >>>>>>> vm-129-disk-0 pve Vwi-aotz-- >>> 20.00g >>> > > data >>> > > >>>>>>> 45.04 >>> > > >>>>>>> 
>>> > > >>>>>>> cat /etc/pve/storage.cfg >>> > > >>>>>>> dir: local >>> > > >>>>>>> path /var/lib/vz >>> > > >>>>>>> content backup,iso,vztmpl >>> > > >>>>>>> >>> > > >>>>>>> lvmthin: local-lvm >>> > > >>>>>>> thinpool data >>> > > >>>>>>> vgname pve >>> > > >>>>>>> content rootdir,images >>> > > >>>>>>> >>> > > >>>>>>> iscsi: iscsi >>> > > >>>>>>> portal some-portal >>> > > >>>>>>> target some-target >>> > > >>>>>>> content images >>> > > >>>>>>> >>> > > >>>>>>> lvm: iscsi-lvm >>> > > >>>>>>> vgname iscsi >>> > > >>>>>>> base iscsi:0.0.0.scsi-mpatha >>> > > >>>>>>> content rootdir,images >>> > > >>>>>>> shared 1 >>> > > >>>>>>> >>> > > >>>>>>> dir: backup >>> > > >>>>>>> path /backup >>> > > >>>>>>> content images,rootdir,iso,backup >>> > > >>>>>>> maxfiles 3 >>> > > >>>>>>> shared 0 >>> > > >>>>>>> --- >>> > > >>>>>>> Gilberto Nunes Ferreira >>> > > >>>>>>> >>> > > >>>>>>> (47) 3025-5907 >>> > > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>> > > >>>>>>> >>> > > >>>>>>> Skype: gilberto.nunes36 >>> > > >>>>>>> >>> > > >>>>>>> >>> > > >>>>>>> >>> > > >>>>>>> >>> > > >>>>>>> >>> > > >>>>>>> Em qui., 13 de fev. de 2020 ?s 08:11, Eneko Lacunza < >>> > > >>>> elacunza at binovo.es> >>> > > >>>>>>> escreveu: >>> > > >>>>>>> >>> > > >>>>>>>> Can you send the output for "lvs" and "cat >>> > /etc/pve/storage.cfg"? >>> > > >>>>>>>> >>> > > >>>>>>>> El 13/2/20 a las 11:13, Gilberto Nunes escribi?: >>> > > >>>>>>>>> HI all >>> > > >>>>>>>>> >>> > > >>>>>>>>> Still in trouble with this issue >>> > > >>>>>>>>> >>> > > >>>>>>>>> cat daemon.log | grep "Feb 12 22:10" >>> > > >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE >>> > replication >>> > > >>>>>>>> runner... >>> > > >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE >>> > replication >>> > > >>>>>> runner. 
>>> > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup >>> of >>> > VM >>> > > >> 110 >>> > > >>>>>>>> (qemu) >>> > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM >>> 110 >>> > > >> failed - >>> > > >>>>>> no >>> > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' >>> > > >>>>>>>>> >>> > > >>>>>>>>> syslog >>> > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup >>> of >>> > VM >>> > > >> 110 >>> > > >>>>>>>> (qemu) >>> > > >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: update VM 110: >>> > -lock >>> > > >>>>>> backup >>> > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM >>> 110 >>> > > >> failed - >>> > > >>>>>> no >>> > > >>>>>>>>> such volume 'local-lvm:vm-110-disk-0' >>> > > >>>>>>>>> >>> > > >>>>>>>>> pveversion >>> > > >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: >>> 4.15.18-12-pve) >>> > > >>>>>>>>> >>> > > >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) >>> > > >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) >>> > > >>>>>>>>> pve-kernel-4.15: 5.4-12 >>> > > >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52 >>> > > >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36 >>> > > >>>>>>>>> corosync: 2.4.4-pve1 >>> > > >>>>>>>>> criu: 2.11.1-1~bpo90 >>> > > >>>>>>>>> glusterfs-client: 3.8.8-1 >>> > > >>>>>>>>> ksm-control-daemon: 1.2-2 >>> > > >>>>>>>>> libjs-extjs: 6.0.1-2 >>> > > >>>>>>>>> libpve-access-control: 5.1-12 >>> > > >>>>>>>>> libpve-apiclient-perl: 2.0-5 >>> > > >>>>>>>>> libpve-common-perl: 5.0-56 >>> > > >>>>>>>>> libpve-guest-common-perl: 2.0-20 >>> > > >>>>>>>>> libpve-http-server-perl: 2.0-14 >>> > > >>>>>>>>> libpve-storage-perl: 5.0-44 >>> > > >>>>>>>>> libqb0: 1.0.3-1~bpo9 >>> > > >>>>>>>>> lvm2: 2.02.168-pve6 >>> > > >>>>>>>>> lxc-pve: 3.1.0-7 >>> > > >>>>>>>>> lxcfs: 3.0.3-pve1 >>> > > >>>>>>>>> novnc-pve: 1.0.0-3 >>> > > >>>>>>>>> proxmox-widget-toolkit: 1.0-28 >>> > > >>>>>>>>> pve-cluster: 5.0-38 >>> > > >>>>>>>>> pve-container: 2.0-41 >>> > > >>>>>>>>> pve-docs: 5.4-2 >>> > > >>>>>>>>> pve-edk2-firmware: 1.20190312-1 >>> > > >>>>>>>>> pve-firewall: 3.0-22 >>> > > >>>>>>>>> pve-firmware: 2.0-7 >>> > > >>>>>>>>> pve-ha-manager: 2.0-9 >>> > > >>>>>>>>> pve-i18n: 1.1-4 >>> > > >>>>>>>>> pve-libspice-server1: 0.14.1-2 >>> > > >>>>>>>>> pve-qemu-kvm: 3.0.1-4 >>> > > >>>>>>>>> pve-xtermjs: 3.12.0-1 >>> > > >>>>>>>>> qemu-server: 5.0-55 >>> > > >>>>>>>>> smartmontools: 6.5+svn4324-1 >>> > > >>>>>>>>> spiceterm: 3.0-5 >>> > > >>>>>>>>> vncterm: 1.5-3 >>> > > >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2 >>> > > >>>>>>>>> >>> > > >>>>>>>>> >>> > > >>>>>>>>> Some help??? Sould I upgrade the server to 6.x?? >>> > > >>>>>>>>> >>> > > >>>>>>>>> Thanks >>> > > >>>>>>>>> >>> > > >>>>>>>>> --- >>> > > >>>>>>>>> Gilberto Nunes Ferreira >>> > > >>>>>>>>> >>> > > >>>>>>>>> (47) 3025-5907 >>> > > >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>> > > >>>>>>>>> >>> > > >>>>>>>>> Skype: gilberto.nunes36 >>> > > >>>>>>>>> >>> > > >>>>>>>>> >>> > > >>>>>>>>> >>> > > >>>>>>>>> >>> > > >>>>>>>>> >>> > > >>>>>>>>> Em qui., 30 de jan. de 2020 ?s 10:10, Gilberto Nunes < >>> > > >>>>>>>>> gilberto.nunes32 at gmail.com> escreveu: >>> > > >>>>>>>>> >>> > > >>>>>>>>>> Hi there >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> I got a strage error last night. Vzdump complain about the >>> > > >>>>>>>>>> disk no exist or lvm volume in this case but the volume >>> exist, >>> > > >>>> indeed! >>> > > >>>>>>>>>> In the morning I have do a manually backup and it's >>> working >>> > > >> fine... 
>>> > > >>>>>>>>>> Any advice? >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 >>> > (qemu) >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: >>> cliente-V-112-IP-165 >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' >>> > > >>>>>>>> 'local-lvm:vm-112-disk-0' 120G >>> > > >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - >>> no >>> > > such >>> > > >>>>>>>> volume 'local-lvm:vm-112-disk-0' >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 >>> > (qemu) >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162 >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' >>> > > >>>>>>>> 'local-lvm:vm-116-disk-0' 100G >>> > > >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - >>> no >>> > > such >>> > > >>>>>>>> volume 'local-lvm:vm-116-disk-0' >>> > > >>>>>>>>>> --- >>> > > >>>>>>>>>> Gilberto Nunes Ferreira >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> (47) 3025-5907 >>> > > >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> Skype: gilberto.nunes36 >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> >>> > > >>>>>>>>>> >>> > > >>>>>>>>> _______________________________________________ >>> > > >>>>>>>>> pve-user mailing list >>> > > >>>>>>>>> pve-user at pve.proxmox.com >>> > > >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> > > >>>>>>>> -- >>> > > >>>>>>>> Zuzendari Teknikoa / Director T?cnico >>> > > >>>>>>>> Binovo IT Human Project, S.L. >>> > > >>>>>>>> Telf. 943569206 >>> > > >>>>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun >>> > > (Gipuzkoa) >>> > > >>>>>>>> www.binovo.es >>> > > >>>>>>>> >>> > > >>>>>>>> _______________________________________________ >>> > > >>>>>>>> pve-user mailing list >>> > > >>>>>>>> pve-user at pve.proxmox.com >>> > > >>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> > > >>>>>>>> >>> > > >>>>>>> _______________________________________________ >>> > > >>>>>>> pve-user mailing list >>> > > >>>>>>> pve-user at pve.proxmox.com >>> > > >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> > > >>>>>> -- >>> > > >>>>>> Zuzendari Teknikoa / Director T?cnico >>> > > >>>>>> Binovo IT Human Project, S.L. >>> > > >>>>>> Telf. 943569206 >>> > > >>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun >>> > (Gipuzkoa) >>> > > >>>>>> www.binovo.es >>> > > >>>>>> >>> > > >>>>>> _______________________________________________ >>> > > >>>>>> pve-user mailing list >>> > > >>>>>> pve-user at pve.proxmox.com >>> > > >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> > > >>>>>> >>> > > >>>>> _______________________________________________ >>> > > >>>>> pve-user mailing list >>> > > >>>>> pve-user at pve.proxmox.com >>> > > >>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> > > >>>> -- >>> > > >>>> Zuzendari Teknikoa / Director T?cnico >>> > > >>>> Binovo IT Human Project, S.L. >>> > > >>>> Telf. 943569206 >>> > > >>>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa)
>>> > > >>>> www.binovo.es
>>> > > >>>>
>>> > > >>>> _______________________________________________
>>> > > >>>> pve-user mailing list
>>> > > >>>> pve-user at pve.proxmox.com
>>> > > >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From elacunza at binovo.es  Mon Mar  9 13:35:57 2020
From: elacunza at binovo.es (Eneko Lacunza)
Date: Mon, 9 Mar 2020 13:35:57 +0100
Subject: [PVE-User] qcow2 vs raw format
In-Reply-To: 
References: 
Message-ID: 

Hola Leandro,

El 9/3/20 a las 13:21, Leandro Roggerone escribió:
> Thankyou Eneko ,
> Are features / performance difference between those formats ?

Take a look at:
https://pve.proxmox.com/wiki/Storage

Performance-wise, they should be almost the same. qcow2 is a bit more
complex than a raw file/partition, but I don't think you'll notice any
impact. Maybe others can comment on this.

Saludos
Eneko

> Regards,
> Leandro.
>
> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () escribió:
>
>> Hola Leandro,
>>
>> El 9/3/20 a las 13:11, Leandro Roggerone escribió:
>>> Hi guys, after install pve , would like to create my first VM.
>>> I noticed that only available format is raw.
>>> Question is:
>>> Is qcow2 deprecated ?
>>> What are differences between them ? (I already google it but is not 100%
>>> clear).
>>>
>>> This is my storage configuration is.
>>> root at pve:~# cat /etc/pve/storage.cfg
>>> dir: local
>>>          path /var/lib/vz
>>>          content iso,vztmpl,backup
>>>
>>> lvmthin: local-lvm
>>>          thinpool data
>>>          vgname pve
>>>          content rootdir,images
>>>
>> You need a file-type storage (i.e. dir type) to be able to use qcow2
>> format. Default local-lvm is lvmthin and only logical volumes in raw
>> format can be created (no raw files are created in fact).
>>
>> Cheers
>> Eneko
>>
>> --
>> Zuzendari Teknikoa / Director Técnico
>> Binovo IT Human Project, S.L.
>> Telf. 943569206
>> Astigarragako bidea 2, 2ª izq.
oficina 11; 20180 Oiartzun (Gipuzkoa) >> www.binovo.es >> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From a.antreich at proxmox.com Mon Mar 9 14:13:26 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Mon, 9 Mar 2020 14:13:26 +0100 Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen In-Reply-To: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> References: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> Message-ID: <20200309131326.GA1385284@dona.proxmox.com> Hallo Gregor, On Mon, Mar 09, 2020 at 01:07:20PM +0100, Gregor Burck wrote: > Moin, > > ich teste verschiedenes mit CEPH. > Dabei habe ich eine SSD ?ber Destroy entfernt. Wie genau den? Down -> Out -> Destroy, ?ber die GUI? > > Wie kann ich die SSD wieder in den Cluster aufnehmen? Beim Hinzuf?gen wird mir keine verf?gbare SSD angezeigt? > Unter Disks wurde die SSD auch noch als osd.4 bezeichnet. Darufhin habe ich auf der Konsole per dd den MBR ?berschrieben. Die Disk muss nach dem zerst?ren der OSD noch gel?scht werden. Ein dd (200 MB) und sgdisk -Z / sollte gen?gen. Im Betrieb kommt das eher selten vor, da die Platten dann meist gegen neue ausgetauscht werden. > > Jetzt taucht sie als Usage: Device Mapper auf und ich kann sie trotzdem nicht neu initialisieren. > Es ist auch noch ein systemd service vorhanden: ceph-osd at 4.service ?brig geblieben, oder ist die Unit vielleicht bereits disabled? Falls letzteres, dann w?rde sie bei einem Reboot entfernt werden. -- Cheers, Alwin From devzero at web.de Mon Mar 9 14:21:16 2020 From: devzero at web.de (Roland @web.de) Date: Mon, 9 Mar 2020 14:21:16 +0100 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: Message-ID: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> >Performance-wise, they should be almost the same. qcow2 is a bit more complex than a raw file/partition, but I don't think you'll notice any impact. i did test that a while ago and got very different results, qcow2 was much slower on my system. cannot provide details, though (did not save that) i recommend reading https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio and compare yourself.... roland Am 09.03.20 um 13:35 schrieb Eneko Lacunza: > Hola Leandro, > > El 9/3/20 a las 13:21, Leandro Roggerone escribi?: >> Thankyou Eneko , >> Are features / performance difference between those formats ? > > Take a look at: > https://pve.proxmox.com/wiki/Storage > > Performance-wise, they should be almost the same. qcow2 is a bit more > complex than a raw file/partition, but I don't think you'll notice any > impact. Maybe others can comment on this. > > Saludos > Eneko > >> Regards, >> Leandro. >> >> >> >> Libre >> de virus. www.avast.com >> >> >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> >> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () >> escribi?: >> >>> Hola Leandro, >>> >>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: >>>> Hi guys, after install pve , would like to create my first VM. >>>> I noticed that only available format is raw. >>>> Question is: >>>> Is qcow2 deprecated ? 
>>>> What are differences between them ? (I already google it but is not >>>> 100% >>>> clear). >>>> >>>> This is my storage configuration is. >>>> root at pve:~# cat /etc/pve/storage.cfg >>>> dir: local >>>> ????????? path /var/lib/vz >>>> ????????? content iso,vztmpl,backup >>>> >>>> lvmthin: local-lvm >>>> ????????? thinpool data >>>> ????????? vgname pve >>>> ????????? content rootdir,images >>>> >>> You need a file-type storage (i.e. dir type) to be able to use qcow2 >>> format. Default local-lvm is lvmthin and only logical volumes in raw >>> format can be created (no raw files are created in fact). >>> >>> Cheers >>> Eneko >>> >>> -- >>> Zuzendari Teknikoa / Director T?cnico >>> Binovo IT Human Project, S.L. >>> Telf. 943569206 >>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) >>> www.binovo.es >>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > From gregor at aeppelbroe.de Mon Mar 9 14:25:41 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Mon, 09 Mar 2020 14:25:41 +0100 Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen In-Reply-To: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> References: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> Message-ID: <3297843.iIbC2pHGDl@ph-pc014.peiker-holding.de> Moin, evtl. noch als Informationen: Die osdis nicht mehr in der ceph crush map aufgef?hrt. Eine neu eingesetzte SSD k?nnte ich als osd einbinden. Diese wird auch unter Disk --> Usage mit No angezeigt. Gr??e Gregor From laurentfdumont at gmail.com Mon Mar 9 14:26:18 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 9 Mar 2020 09:26:18 -0400 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: This server fault post was written in 2012. I will do some test in the lab but those results seem really really low for a modern system with modern packages. On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: > >Performance-wise, they should be almost the same. qcow2 is a bit more > complex than a raw file/partition, but I don't think you'll notice any > impact. > > i did test that a while ago and got very different results, qcow2 was > much slower on my system. cannot provide details, though (did not save > that) > > i recommend reading > > https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio > and compare yourself.... > > roland > > Am 09.03.20 um 13:35 schrieb Eneko Lacunza: > > Hola Leandro, > > > > El 9/3/20 a las 13:21, Leandro Roggerone escribi?: > >> Thankyou Eneko , > >> Are features / performance difference between those formats ? > > > > Take a look at: > > https://pve.proxmox.com/wiki/Storage > > > > Performance-wise, they should be almost the same. qcow2 is a bit more > > complex than a raw file/partition, but I don't think you'll notice any > > impact. Maybe others can comment on this. > > > > Saludos > > Eneko > > > >> Regards, > >> Leandro. > >> > >> < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > >> > >> Libre > >> de virus. 
www.avast.com > >> < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > >> > >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > >> > >> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () > >> escribi?: > >> > >>> Hola Leandro, > >>> > >>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > >>>> Hi guys, after install pve , would like to create my first VM. > >>>> I noticed that only available format is raw. > >>>> Question is: > >>>> Is qcow2 deprecated ? > >>>> What are differences between them ? (I already google it but is not > >>>> 100% > >>>> clear). > >>>> > >>>> This is my storage configuration is. > >>>> root at pve:~# cat /etc/pve/storage.cfg > >>>> dir: local > >>>> path /var/lib/vz > >>>> content iso,vztmpl,backup > >>>> > >>>> lvmthin: local-lvm > >>>> thinpool data > >>>> vgname pve > >>>> content rootdir,images > >>>> > >>> You need a file-type storage (i.e. dir type) to be able to use qcow2 > >>> format. Default local-lvm is lvmthin and only logical volumes in raw > >>> format can be created (no raw files are created in fact). > >>> > >>> Cheers > >>> Eneko > >>> > >>> -- > >>> Zuzendari Teknikoa / Director T?cnico > >>> Binovo IT Human Project, S.L. > >>> Telf. 943569206 > >>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) > >>> www.binovo.es > >>> > >>> _______________________________________________ > >>> pve-user mailing list > >>> pve-user at pve.proxmox.com > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >>> > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gregor at aeppelbroe.de Mon Mar 9 14:35:05 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Mon, 09 Mar 2020 14:35:05 +0100 Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen In-Reply-To: <20200309131326.GA1385284@dona.proxmox.com> References: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> <20200309131326.GA1385284@dona.proxmox.com> Message-ID: <4021555.ejJDZkT8p0@ph-pc014.peiker-holding.de> Moin, > Wie genau den? Down -> Out -> Destroy, ?ber die GUI? Yep ?ber die GUI > Die Disk muss nach dem zerst?ren der OSD noch gel?scht werden. Ein dd > (200 MB) und sgdisk -Z / sollte gen?gen. sgdisk kannte ich noch nicht, brachte aber erst mal keinen Unterschied. > Im Betrieb kommt das eher selten vor, da die Platten dann meist gegen > neue ausgetauscht werden. Klar, im Testbetrieb dann aber schon mal,... > ?brig geblieben, oder ist die Unit vielleicht bereits disabled? Falls > letzteres, dann w?rde sie bei einem Reboot entfernt werden. Mal sehen, wie es nach einem reboot aussieht. Gr??e Gregor From leandro at tecnetmza.com.ar Mon Mar 9 14:34:26 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 9 Mar 2020 10:34:26 -0300 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: Is there any reason to use qcow2 format ? Im asking because I received qcow2 disk from an old system and need to upload into my new pve. I just want to know why people used this, my system currently does not support it. Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> El lun., 9 mar. 
2020 a las 10:27, Laurent Dumont () escribi?: > This server fault post was written in 2012. I will do some test in the lab > but those results seem really really low for a modern system with modern > packages. > > On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: > > > >Performance-wise, they should be almost the same. qcow2 is a bit more > > complex than a raw file/partition, but I don't think you'll notice any > > impact. > > > > i did test that a while ago and got very different results, qcow2 was > > much slower on my system. cannot provide details, though (did not save > > that) > > > > i recommend reading > > > > > https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio > > and compare yourself.... > > > > roland > > > > Am 09.03.20 um 13:35 schrieb Eneko Lacunza: > > > Hola Leandro, > > > > > > El 9/3/20 a las 13:21, Leandro Roggerone escribi?: > > >> Thankyou Eneko , > > >> Are features / performance difference between those formats ? > > > > > > Take a look at: > > > https://pve.proxmox.com/wiki/Storage > > > > > > Performance-wise, they should be almost the same. qcow2 is a bit more > > > complex than a raw file/partition, but I don't think you'll notice any > > > impact. Maybe others can comment on this. > > > > > > Saludos > > > Eneko > > > > > >> Regards, > > >> Leandro. > > >> > > >> < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > >> > > >> Libre > > >> de virus. www.avast.com > > >> < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > >> > > >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > >> > > >> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () > > >> escribi?: > > >> > > >>> Hola Leandro, > > >>> > > >>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > > >>>> Hi guys, after install pve , would like to create my first VM. > > >>>> I noticed that only available format is raw. > > >>>> Question is: > > >>>> Is qcow2 deprecated ? > > >>>> What are differences between them ? (I already google it but is not > > >>>> 100% > > >>>> clear). > > >>>> > > >>>> This is my storage configuration is. > > >>>> root at pve:~# cat /etc/pve/storage.cfg > > >>>> dir: local > > >>>> path /var/lib/vz > > >>>> content iso,vztmpl,backup > > >>>> > > >>>> lvmthin: local-lvm > > >>>> thinpool data > > >>>> vgname pve > > >>>> content rootdir,images > > >>>> > > >>> You need a file-type storage (i.e. dir type) to be able to use qcow2 > > >>> format. Default local-lvm is lvmthin and only logical volumes in raw > > >>> format can be created (no raw files are created in fact). > > >>> > > >>> Cheers > > >>> Eneko > > >>> > > >>> -- > > >>> Zuzendari Teknikoa / Director T?cnico > > >>> Binovo IT Human Project, S.L. > > >>> Telf. 943569206 > > >>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa) > > >>> www.binovo.es > > >>> > > >>> _______________________________________________ > > >>> pve-user mailing list > > >>> pve-user at pve.proxmox.com > > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > >>> > > >> _______________________________________________ > > >> pve-user mailing list > > >> pve-user at pve.proxmox.com > > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> From laurentfdumont at gmail.com Mon Mar 9 14:37:37 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 9 Mar 2020 09:37:37 -0400 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: I'm not sure if this still stands but you cannot do snapshots on a raw format, only qcow2. On Mon, Mar 9, 2020, 9:35 AM Leandro Roggerone wrote: > Is there any reason to use qcow2 format ? > Im asking because I received qcow2 disk from an old system and need to > upload into my new pve. > I just want to know why people used this, my system currently does not > support it. > Leandro. > > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > Libre > de virus. www.avast.com > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > El lun., 9 mar. 2020 a las 10:27, Laurent Dumont (< > laurentfdumont at gmail.com>) > escribi?: > > > This server fault post was written in 2012. I will do some test in the > lab > > but those results seem really really low for a modern system with modern > > packages. > > > > On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: > > > > > >Performance-wise, they should be almost the same. qcow2 is a bit more > > > complex than a raw file/partition, but I don't think you'll notice any > > > impact. > > > > > > i did test that a while ago and got very different results, qcow2 was > > > much slower on my system. cannot provide details, though (did not save > > > that) > > > > > > i recommend reading > > > > > > > > > https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio > > > and compare yourself.... > > > > > > roland > > > > > > Am 09.03.20 um 13:35 schrieb Eneko Lacunza: > > > > Hola Leandro, > > > > > > > > El 9/3/20 a las 13:21, Leandro Roggerone escribi?: > > > >> Thankyou Eneko , > > > >> Are features / performance difference between those formats ? > > > > > > > > Take a look at: > > > > https://pve.proxmox.com/wiki/Storage > > > > > > > > Performance-wise, they should be almost the same. qcow2 is a bit more > > > > complex than a raw file/partition, but I don't think you'll notice > any > > > > impact. Maybe others can comment on this. > > > > > > > > Saludos > > > > Eneko > > > > > > > >> Regards, > > > >> Leandro. > > > >> > > > >> < > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > >> > > > >> Libre > > > >> de virus. 
www.avast.com > > > >> < > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > >> > > > >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > > >> > > > >> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza ( >) > > > >> escribi?: > > > >> > > > >>> Hola Leandro, > > > >>> > > > >>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > > > >>>> Hi guys, after install pve , would like to create my first VM. > > > >>>> I noticed that only available format is raw. > > > >>>> Question is: > > > >>>> Is qcow2 deprecated ? > > > >>>> What are differences between them ? (I already google it but is > not > > > >>>> 100% > > > >>>> clear). > > > >>>> > > > >>>> This is my storage configuration is. > > > >>>> root at pve:~# cat /etc/pve/storage.cfg > > > >>>> dir: local > > > >>>> path /var/lib/vz > > > >>>> content iso,vztmpl,backup > > > >>>> > > > >>>> lvmthin: local-lvm > > > >>>> thinpool data > > > >>>> vgname pve > > > >>>> content rootdir,images > > > >>>> > > > >>> You need a file-type storage (i.e. dir type) to be able to use > qcow2 > > > >>> format. Default local-lvm is lvmthin and only logical volumes in > raw > > > >>> format can be created (no raw files are created in fact). > > > >>> > > > >>> Cheers > > > >>> Eneko > > > >>> > > > >>> -- > > > >>> Zuzendari Teknikoa / Director T?cnico > > > >>> Binovo IT Human Project, S.L. > > > >>> Telf. 943569206 > > > >>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > (Gipuzkoa) > > > >>> www.binovo.es > > > >>> > > > >>> _______________________________________________ > > > >>> pve-user mailing list > > > >>> pve-user at pve.proxmox.com > > > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > >>> > > > >> _______________________________________________ > > > >> pve-user mailing list > > > >> pve-user at pve.proxmox.com > > > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > Libre > de virus. www.avast.com > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From s.reiter at proxmox.com Mon Mar 9 14:55:52 2020 From: s.reiter at proxmox.com (Stefan Reiter) Date: Mon, 9 Mar 2020 14:55:52 +0100 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: On 3/9/20 2:37 PM, Laurent Dumont wrote: > I'm not sure if this still stands but you cannot do snapshots on a raw > format, only qcow2. Correct, but note that this only applies to file-based raw storage, a "raw" volume on lvm-thin, ZFS, etc. does of course support snapshots. 
On file-based storages, qcow2 does indeed give you the benefit of snapshots over raw images, but comes with a slight overhead - usually negligible IMHO and from my experience, but that's something you should test yourself with your own workload if you need it. > > On Mon, Mar 9, 2020, 9:35 AM Leandro Roggerone > wrote: > >> Is there any reason to use qcow2 format ? >> Im asking because I received qcow2 disk from an old system and need to >> upload into my new pve. FYI: You can use 'qm importdisk' to attach existing disk images to a VM. This also allows to import qcow2 images to storages not supporting qcow2 as a format, e.g. LVM, ZFS, ... >> I just want to know why people used this, my system currently does not >> support it. >> Leandro. >> >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>> >> Libre >> de virus. www.avast.com >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>> >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> >> El lun., 9 mar. 2020 a las 10:27, Laurent Dumont (< >> laurentfdumont at gmail.com>) >> escribi?: >> >>> This server fault post was written in 2012. I will do some test in the >> lab >>> but those results seem really really low for a modern system with modern >>> packages. >>> >>> On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: >>> >>>> >Performance-wise, they should be almost the same. qcow2 is a bit more >>>> complex than a raw file/partition, but I don't think you'll notice any >>>> impact. >>>> >>>> i did test that a while ago and got very different results, qcow2 was >>>> much slower on my system. cannot provide details, though (did not save >>>> that) >>>> >>>> i recommend reading >>>> >>>> >>> >> https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio >>>> and compare yourself.... >>>> >>>> roland >>>> >>>> Am 09.03.20 um 13:35 schrieb Eneko Lacunza: >>>>> Hola Leandro, >>>>> >>>>> El 9/3/20 a las 13:21, Leandro Roggerone escribi?: >>>>>> Thankyou Eneko , >>>>>> Are features / performance difference between those formats ? >>>>> >>>>> Take a look at: >>>>> https://pve.proxmox.com/wiki/Storage >>>>> >>>>> Performance-wise, they should be almost the same. qcow2 is a bit more >>>>> complex than a raw file/partition, but I don't think you'll notice >> any >>>>> impact. Maybe others can comment on this. >>>>> >>>>> Saludos >>>>> Eneko >>>>> >>>>>> Regards, >>>>>> Leandro. >>>>>> >>>>>> < >>>> >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>>> >>>>>> >>>>>> Libre >>>>>> de virus. www.avast.com >>>>>> < >>>> >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>>> >>>>>> >>>>>> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >>>>>> >>>>>> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza (>> ) >>>>>> escribi?: >>>>>> >>>>>>> Hola Leandro, >>>>>>> >>>>>>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: >>>>>>>> Hi guys, after install pve , would like to create my first VM. >>>>>>>> I noticed that only available format is raw. >>>>>>>> Question is: >>>>>>>> Is qcow2 deprecated ? >>>>>>>> What are differences between them ? (I already google it but is >> not >>>>>>>> 100% >>>>>>>> clear). >>>>>>>> >>>>>>>> This is my storage configuration is. 
>>>>>>>> root at pve:~# cat /etc/pve/storage.cfg >>>>>>>> dir: local >>>>>>>> path /var/lib/vz >>>>>>>> content iso,vztmpl,backup >>>>>>>> >>>>>>>> lvmthin: local-lvm >>>>>>>> thinpool data >>>>>>>> vgname pve >>>>>>>> content rootdir,images >>>>>>>> >>>>>>> You need a file-type storage (i.e. dir type) to be able to use >> qcow2 >>>>>>> format. Default local-lvm is lvmthin and only logical volumes in >> raw >>>>>>> format can be created (no raw files are created in fact). >>>>>>> >>>>>>> Cheers >>>>>>> Eneko >>>>>>> >>>>>>> -- >>>>>>> Zuzendari Teknikoa / Director T?cnico >>>>>>> Binovo IT Human Project, S.L. >>>>>>> Telf. 943569206 >>>>>>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun >> (Gipuzkoa) >>>>>>> www.binovo.es >>>>>>> >>>>>>> _______________________________________________ >>>>>>> pve-user mailing list >>>>>>> pve-user at pve.proxmox.com >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>>>> >>>>>> _______________________________________________ >>>>>> pve-user mailing list >>>>>> pve-user at pve.proxmox.com >>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>>> >>>>> >>>> _______________________________________________ >>>> pve-user mailing list >>>> pve-user at pve.proxmox.com >>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>>> >>> _______________________________________________ >>> pve-user mailing list >>> pve-user at pve.proxmox.com >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >>> >> >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>> >> Libre >> de virus. www.avast.com >> < >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>> >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From a.antreich at proxmox.com Mon Mar 9 15:35:40 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Mon, 9 Mar 2020 15:35:40 +0100 Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen In-Reply-To: <4021555.ejJDZkT8p0@ph-pc014.peiker-holding.de> References: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de> <20200309131326.GA1385284@dona.proxmox.com> <4021555.ejJDZkT8p0@ph-pc014.peiker-holding.de> Message-ID: <20200309143540.GB1385284@dona.proxmox.com> On Mon, Mar 09, 2020 at 02:35:05PM +0100, Gregor Burck wrote: > Moin, > > > Wie genau den? Down -> Out -> Destroy, ?ber die GUI? > Yep ?ber die GUI Am besten das 'cleanup disks' angeklickt lassen. Damit wird die Partitionstabelle und die ersten 200 MB entfernt. Oder per CLI mit '--cleanup'. > > > Die Disk muss nach dem zerst?ren der OSD noch gel?scht werden. Ein dd > > (200 MB) und sgdisk -Z / sollte gen?gen. > sgdisk kannte ich noch nicht, brachte aber erst mal keinen Unterschied. L?scht die GPT. -- Cheers, Alwin From leandro at tecnetmza.com.ar Mon Mar 9 15:37:09 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Mon, 9 Mar 2020 11:37:09 -0300 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: Can not do snapshots ? but can do bakups ? is it necessary to turn off VM ? Regads. El lun., 9 mar. 
2020 a las 10:39, Laurent Dumont () escribi?: > I'm not sure if this still stands but you cannot do snapshots on a raw > format, only qcow2. > > On Mon, Mar 9, 2020, 9:35 AM Leandro Roggerone > wrote: > > > Is there any reason to use qcow2 format ? > > Im asking because I received qcow2 disk from an old system and need to > > upload into my new pve. > > I just want to know why people used this, my system currently does not > > support it. > > Leandro. > > > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > Libre > > de virus. www.avast.com > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > > > El lun., 9 mar. 2020 a las 10:27, Laurent Dumont (< > > laurentfdumont at gmail.com>) > > escribi?: > > > > > This server fault post was written in 2012. I will do some test in the > > lab > > > but those results seem really really low for a modern system with > modern > > > packages. > > > > > > On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: > > > > > > > >Performance-wise, they should be almost the same. qcow2 is a bit > more > > > > complex than a raw file/partition, but I don't think you'll notice > any > > > > impact. > > > > > > > > i did test that a while ago and got very different results, qcow2 was > > > > much slower on my system. cannot provide details, though (did not > save > > > > that) > > > > > > > > i recommend reading > > > > > > > > > > > > > > https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio > > > > and compare yourself.... > > > > > > > > roland > > > > > > > > Am 09.03.20 um 13:35 schrieb Eneko Lacunza: > > > > > Hola Leandro, > > > > > > > > > > El 9/3/20 a las 13:21, Leandro Roggerone escribi?: > > > > >> Thankyou Eneko , > > > > >> Are features / performance difference between those formats ? > > > > > > > > > > Take a look at: > > > > > https://pve.proxmox.com/wiki/Storage > > > > > > > > > > Performance-wise, they should be almost the same. qcow2 is a bit > more > > > > > complex than a raw file/partition, but I don't think you'll notice > > any > > > > > impact. Maybe others can comment on this. > > > > > > > > > > Saludos > > > > > Eneko > > > > > > > > > >> Regards, > > > > >> Leandro. > > > > >> > > > > >> < > > > > > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > > > >> > > > > >> Libre > > > > >> de virus. www.avast.com > > > > >> < > > > > > > > > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > > > > > >> > > > > >> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > > > >> > > > > >> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza (< > elacunza at binovo.es > > >) > > > > >> escribi?: > > > > >> > > > > >>> Hola Leandro, > > > > >>> > > > > >>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: > > > > >>>> Hi guys, after install pve , would like to create my first VM. > > > > >>>> I noticed that only available format is raw. > > > > >>>> Question is: > > > > >>>> Is qcow2 deprecated ? > > > > >>>> What are differences between them ? (I already google it but is > > not > > > > >>>> 100% > > > > >>>> clear). > > > > >>>> > > > > >>>> This is my storage configuration is. 
> > > > >>>> root at pve:~# cat /etc/pve/storage.cfg > > > > >>>> dir: local > > > > >>>> path /var/lib/vz > > > > >>>> content iso,vztmpl,backup > > > > >>>> > > > > >>>> lvmthin: local-lvm > > > > >>>> thinpool data > > > > >>>> vgname pve > > > > >>>> content rootdir,images > > > > >>>> > > > > >>> You need a file-type storage (i.e. dir type) to be able to use > > qcow2 > > > > >>> format. Default local-lvm is lvmthin and only logical volumes in > > raw > > > > >>> format can be created (no raw files are created in fact). > > > > >>> > > > > >>> Cheers > > > > >>> Eneko > > > > >>> > > > > >>> -- > > > > >>> Zuzendari Teknikoa / Director T?cnico > > > > >>> Binovo IT Human Project, S.L. > > > > >>> Telf. 943569206 > > > > >>> Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun > > (Gipuzkoa) > > > > >>> www.binovo.es > > > > >>> > > > > >>> _______________________________________________ > > > > >>> pve-user mailing list > > > > >>> pve-user at pve.proxmox.com > > > > >>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > >>> > > > > >> _______________________________________________ > > > > >> pve-user mailing list > > > > >> pve-user at pve.proxmox.com > > > > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > > > > > > > > _______________________________________________ > > > > pve-user mailing list > > > > pve-user at pve.proxmox.com > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > Libre > > de virus. www.avast.com > > < > > > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail > > > > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From s.reiter at proxmox.com Mon Mar 9 15:43:32 2020 From: s.reiter at proxmox.com (Stefan Reiter) Date: Mon, 9 Mar 2020 15:43:32 +0100 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: <730a9d7c-3a6e-9bf2-aaf2-a096482be817@web.de> Message-ID: <253e34cf-9b9c-120a-3be0-5f2d7f818ed6@proxmox.com> On 3/9/20 3:37 PM, Leandro Roggerone wrote: > Can not do snapshots ? but can do bakups ? is it necessary to turn off VM ? > Regads. Backups in "snapshot" mode (i.e. without turning off the VM) are always possible, even on storages not supporting snapshots. > > El lun., 9 mar. 2020 a las 10:39, Laurent Dumont () > escribi?: > >> I'm not sure if this still stands but you cannot do snapshots on a raw >> format, only qcow2. >> >> On Mon, Mar 9, 2020, 9:35 AM Leandro Roggerone >> wrote: >> >>> Is there any reason to use qcow2 format ? >>> Im asking because I received qcow2 disk from an old system and need to >>> upload into my new pve. >>> I just want to know why people used this, my system currently does not >>> support it. >>> Leandro. >>> >>> < >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>> >>> Libre >>> de virus. 
www.avast.com >>> < >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>> >>> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >>> >>> El lun., 9 mar. 2020 a las 10:27, Laurent Dumont (< >>> laurentfdumont at gmail.com>) >>> escribi?: >>> >>>> This server fault post was written in 2012. I will do some test in the >>> lab >>>> but those results seem really really low for a modern system with >> modern >>>> packages. >>>> >>>> On Mon, Mar 9, 2020, 9:21 AM Roland @web.de wrote: >>>> >>>>> >Performance-wise, they should be almost the same. qcow2 is a bit >> more >>>>> complex than a raw file/partition, but I don't think you'll notice >> any >>>>> impact. >>>>> >>>>> i did test that a while ago and got very different results, qcow2 was >>>>> much slower on my system. cannot provide details, though (did not >> save >>>>> that) >>>>> >>>>> i recommend reading >>>>> >>>>> >>>> >>> >> https://serverfault.com/questions/407842/incredibly-low-kvm-disk-performance-qcow2-disk-files-virtio >>>>> and compare yourself.... >>>>> >>>>> roland >>>>> >>>>> Am 09.03.20 um 13:35 schrieb Eneko Lacunza: >>>>>> Hola Leandro, >>>>>> >>>>>> El 9/3/20 a las 13:21, Leandro Roggerone escribi?: >>>>>>> Thankyou Eneko , >>>>>>> Are features / performance difference between those formats ? >>>>>> >>>>>> Take a look at: >>>>>> https://pve.proxmox.com/wiki/Storage >>>>>> >>>>>> Performance-wise, they should be almost the same. qcow2 is a bit >> more >>>>>> complex than a raw file/partition, but I don't think you'll notice >>> any >>>>>> impact. Maybe others can comment on this. >>>>>> >>>>>> Saludos >>>>>> Eneko >>>>>> >>>>>>> Regards, >>>>>>> Leandro. >>>>>>> >>>>>>> < >>>>> >>>> >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>>>> >>>>>>> >>>>>>> Libre >>>>>>> de virus. www.avast.com >>>>>>> < >>>>> >>>> >>> >> https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail >>>>>> >>>>>>> >>>>>>> <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> >>>>>>> >>>>>>> El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza (< >> elacunza at binovo.es >>>> ) >>>>>>> escribi?: >>>>>>> >>>>>>>> Hola Leandro, >>>>>>>> >>>>>>>> El 9/3/20 a las 13:11, Leandro Roggerone escribi?: >>>>>>>>> Hi guys, after install pve , would like to create my first VM. >>>>>>>>> I noticed that only available format is raw. >>>>>>>>> Question is: >>>>>>>>> Is qcow2 deprecated ? >>>>>>>>> What are differences between them ? (I already google it but is >>> not >>>>>>>>> 100% >>>>>>>>> clear). >>>>>>>>> >>>>>>>>> This is my storage configuration is. >>>>>>>>> root at pve:~# cat /etc/pve/storage.cfg >>>>>>>>> dir: local >>>>>>>>> path /var/lib/vz >>>>>>>>> content iso,vztmpl,backup >>>>>>>>> >>>>>>>>> lvmthin: local-lvm >>>>>>>>> thinpool data >>>>>>>>> vgname pve >>>>>>>>> content rootdir,images >>>>>>>>> >>>>>>>> You need a file-type storage (i.e. dir type) to be able to use >>> qcow2 >>>>>>>> format. Default local-lvm is lvmthin and only logical volumes in >>> raw >>>>>>>> format can be created (no raw files are created in fact). >>>>>>>> >>>>>>>> Cheers >>>>>>>> Eneko >>>>>>>> >>>>>>>> -- >>>>>>>> Zuzendari Teknikoa / Director T?cnico >>>>>>>> Binovo IT Human Project, S.L. >>>>>>>> Telf. 943569206 >>>>>>>> Astigarragako bidea 2, 2? izq. 
oficina 11; 20180 Oiartzun (Gipuzkoa)
>>>>>>>> www.binovo.es
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> pve-user mailing list
>>>>>>>> pve-user at pve.proxmox.com
>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From gregor at aeppelbroe.de  Mon Mar  9 17:29:31 2020
From: gregor at aeppelbroe.de (Gregor Burck)
Date: Mon, 09 Mar 2020 17:29:31 +0100
Subject: [PVE-User] SSD als osd neu initialisieren/wieder aufnehmen
In-Reply-To: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de>
References: <105591386.nniJfEyVGO@ph-pc014.peiker-holding.de>
Message-ID: <1793554.taCxCBeP46@ph-pc014.peiker-holding.de>

Hi,

it did indeed take a reboot of the machine; after that the SSD was free
again and could be re-added.

Regards

Gregor

From 525308160 at qq.com  Tue Mar 10 09:39:03 2020
From: 525308160 at qq.com (=?ISO-8859-1?B?amF4?=)
Date: Tue, 10 Mar 2020 16:39:03 +0800
Subject: [PVE-User] pve and ovs-dpdk
Message-ID: 

Hello,

how can OVS be used with DPDK on PVE 5.4? I have tried to compile and
install OVS and DPDK myself, but that OVS build is not recognized on the
PVE web management page. I then used apt-get install
openvswitch-switch-dpdk to install the packaged DPDK-enabled OVS; it is
recognized, but many settings cannot be configured. Does PVE have any
related examples or documentation to share? Thanks!

From aderumier at odiso.com  Tue Mar 10 12:09:33 2020
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Tue, 10 Mar 2020 12:09:33 +0100 (CET)
Subject: [PVE-User] pve and ovs-dpdk
In-Reply-To: 
References: 
Message-ID: <449432802.4602284.1583838573855.JavaMail.zimbra@odiso.com>

Hi,

currently there is no DPDK support in Proxmox. I have recently sent OVS
2.12 for Proxmox 6 (it can also be built with DPDK), but other pieces
still need to be implemented before it can be used (vhost-user for the
QEMU NICs, for example). You also need transparent hugepages enabled in
the VM.

----- Mail original -----
De: "jax" <525308160 at qq.com>
À: "proxmoxve" 
Envoyé: Mardi 10 Mars 2020 09:39:03
Objet: [PVE-User] pve and ovs-dpdk

Hello how to use ovs with dpdk in pve5.4 version?
I have tried to compile and install ovs and dpdk, but the ovs cannot be recognized on the web management page of pve, and then use apt-get install openvswitch-switch-dpdk to install the packaged ovs of dpdk. Although they can be identified, many configurations are not available. Finished, does pve have any related cases or document sharing? thank! _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From aderumier at odiso.com Tue Mar 10 12:11:56 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Tue, 10 Mar 2020 12:11:56 +0100 (CET) Subject: [PVE-User] Creating VM with thin provisioning (ESXi fashion) In-Reply-To: References: Message-ID: <65084650.4602324.1583838716692.JavaMail.zimbra@odiso.com> Hi, almost all storages are thin provisionned by default in proxmox (Create a new disk of 400Gb, it'll use 0 until you create datas on it) ----- Mail original ----- De: "Leandro Roggerone" ?: "proxmoxve" Envoy?: Lundi 9 Mars 2020 12:53:29 Objet: [PVE-User] Creating VM with thin provisioning (ESXi fashion) Sorry to compare ... between systems , I just want to get to the same function: When you create a VM on vmware you assign a max storage , for example 400Gb. Then you can see from vm dashboard that this VM is using 194Gb. But ,if I look inside the VM , you will see: [root at ftp ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 * 400G 0 disk* ??sda1 8:1 0 1G 0 part /boot ??sda2 8:2 0 398.9G 0 part ??centos-root 253:0 0 395G 0 lvm / ??centos-swap 253:1 0 3.9G 0 lvm [SWAP] sr0 11:0 1 918M 0 rom The VM "believes" it has a 400Gb drive, and can continue automatically growing up to those 400 GB but it is currently using only 194Gb. I would like to work same way on my PVE environment if possible .. or what is best approach. Leandro. Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From leandro at tecnetmza.com.ar Tue Mar 10 12:51:14 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Tue, 10 Mar 2020 08:51:14 -0300 Subject: [PVE-User] Creating VM with thin provisioning (ESXi fashion) In-Reply-To: <65084650.4602324.1583838716692.JavaMail.zimbra@odiso.com> References: <65084650.4602324.1583838716692.JavaMail.zimbra@odiso.com> Message-ID: Thanks!! Libre de virus. www.avast.com <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> El mar., 10 mar. 2020 a las 8:12, Alexandre DERUMIER () escribi?: > Hi, > > almost all storages are thin provisionned by default in proxmox > > (Create a new disk of 400Gb, it'll use 0 until you create datas on it) > > > ----- Mail original ----- > De: "Leandro Roggerone" > ?: "proxmoxve" > Envoy?: Lundi 9 Mars 2020 12:53:29 > Objet: [PVE-User] Creating VM with thin provisioning (ESXi fashion) > > Sorry to compare ... between systems , I just want to get to the same > function: > When you create a VM on vmware you assign a max storage , for example > 400Gb. > Then you can see from vm dashboard that this VM is using 194Gb. 
> > But ,if I look inside the VM , you will see: > > [root at ftp ~]# lsblk > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > sda 8:0 0 * 400G 0 disk* > ??sda1 8:1 0 1G 0 part /boot > ??sda2 8:2 0 398.9G 0 part > ??centos-root 253:0 0 395G 0 lvm / > ??centos-swap 253:1 0 3.9G 0 lvm [SWAP] > sr0 11:0 1 918M 0 rom > > The VM "believes" it has a 400Gb drive, and can continue automatically > growing up to those 400 GB but it is currently using only 194Gb. > I would like to work same way on my PVE environment if possible .. or what > is best approach. > Leandro. > > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail> > > Libre > de virus. www.avast.com > < > https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail> > > <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From renato at aster-lab.com Wed Mar 11 07:35:21 2020 From: renato at aster-lab.com (Renato Gallo) Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) Subject: lzo files conundrum Message-ID: <637457951.15611.1583908521095.JavaMail.zimbra@aster-lab.com> Hello, we have two machines same system 100 Gb of virtual disk both have the disks on a zfspool. from the df -h point of view the disks use is quite similar. we backup those machine regularly during the weekend using the scheduled backup of proxmox from the datastore menu. we cannot understand why one machine have a .lzo file of 9.something gigas and the other has a backup .lzo file of around 40 gigas. can you explain this to us ? Renato Gallo From a.antreich at proxmox.com Wed Mar 11 08:55:13 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 11 Mar 2020 08:55:13 +0100 Subject: [PVE-User] lzo files conundrum In-Reply-To: References: Message-ID: <20200311075513.GD1385284@dona.proxmox.com> Hello Renato, On Wed, Mar 11, 2020 at 07:35:21AM +0100, Renato Gallo via pve-user wrote: > Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) > From: Renato Gallo > To: pve-user at pve.proxmox.com > Cc: g noto > Subject: lzo files conundrum > X-Mailer: Zimbra 8.8.15_GA_3829 (ZimbraWebClient - FF68 > (Linux)/8.8.15_GA_3829) > > Hello, > > we have two machines same system 100 Gb of virtual disk both have the disks on a zfspool. > from the df -h point of view the disks use is quite similar. > we backup those machine regularly during the weekend using the scheduled backup of proxmox from the datastore menu. > we cannot understand why one machine have a .lzo file of 9.something gigas and the other has a backup .lzo file of around 40 gigas. > can you explain this to us ? Possibly thin-provisioning. In the storage definition the _Thin provision_ needs to be set, to create thin-provisioned zVOLs from the start. And OFC, you will need to have the VM disks setup with SCSI and discard on to run a TRIM inside the VM. 
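A minimal sketch of the settings described above, assuming a ZFS storage named local-zfs and a VM with ID 100; both names are placeholders, and the space savings depend on the guest actually discarding freed blocks:

  # /etc/pve/storage.cfg - 'sparse 1' makes newly created zvols thin-provisioned
  zfspool: local-zfs
          pool rpool/data
          sparse 1
          content rootdir,images

  # on the host: attach the disk via SCSI with discard enabled, so TRIM reaches the zvol
  qm set 100 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-100-disk-0,discard=on

  # inside the guest: release blocks freed by deleted files
  fstrim -av

Note that sparse only affects zvols created from then on; disks that already exist keep their reservation until they are moved or recreated.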
-- Cheers, Alwin From ronny+pve-user at aasen.cx Wed Mar 11 08:56:32 2020 From: ronny+pve-user at aasen.cx (Ronny Aasen) Date: Wed, 11 Mar 2020 08:56:32 +0100 Subject: [PVE-User] lzo files conundrum In-Reply-To: <20200311075513.GD1385284@dona.proxmox.com> References: <20200311075513.GD1385284@dona.proxmox.com> Message-ID: On 11.03.2020 08:55, Alwin Antreich wrote: > Hello Renato, > > On Wed, Mar 11, 2020 at 07:35:21AM +0100, Renato Gallo via pve-user wrote: >> Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) >> From: Renato Gallo >> To: pve-user at pve.proxmox.com >> Cc: g noto >> Subject: lzo files conundrum >> X-Mailer: Zimbra 8.8.15_GA_3829 (ZimbraWebClient - FF68 >> (Linux)/8.8.15_GA_3829) >> >> Hello, >> >> we have two machines same system 100 Gb of virtual disk both have the disks on a zfspool. >> from the df -h point of view the disks use is quite similar. >> we backup those machine regularly during the weekend using the scheduled backup of proxmox from the datastore menu. >> we cannot understand why one machine have a .lzo file of 9.something gigas and the other has a backup .lzo file of around 40 gigas. >> can you explain this to us ? > Possibly thin-provisioning. > > In the storage definition the _Thin provision_ needs to be set, to > create thin-provisioned zVOLs from the start. And OFC, you will need to > have the VM disks setup with SCSI and discard on to run a TRIM inside > the VM. > you can also run fstrim from inside the vm to manually trim filesystems. Ronny From renato at aster-lab.com Wed Mar 11 09:07:42 2020 From: renato at aster-lab.com (Renato Gallo) Date: Wed, 11 Mar 2020 09:07:42 +0100 (CET) Subject: [PVE-User] lzo files conundrum In-Reply-To: <1396743101.59299.1583913923448.JavaMail.zimbra@way-interactive.com> References: <20200311075513.GD1385284@dona.proxmox.com> <1396743101.59299.1583913923448.JavaMail.zimbra@way-interactive.com> Message-ID: <409237358.15712.1583914062943.JavaMail.zimbra@aster-lab.com> Hello, Might be I was not clear, I apologize. it's not that I want both backups to be equal. Both machines are set up as scsi and have equal storage configuration. Both machines sit on the same proxmox server. I would like to know why the backups gigas are so different one from another. Renato Gallo ----- Original Message ----- From: "Alwin Antreich" To: "PVE User List" Cc: "renato" , "Gaspare Noto" Sent: Wednesday, March 11, 2020 8:55:13 AM Subject: Re: [PVE-User] lzo files conundrum Hello Renato, On Wed, Mar 11, 2020 at 07:35:21AM +0100, Renato Gallo via pve-user wrote: > Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) > From: Renato Gallo > To: pve-user at pve.proxmox.com > Cc: g noto > Subject: lzo files conundrum > X-Mailer: Zimbra 8.8.15_GA_3829 (ZimbraWebClient - FF68 > (Linux)/8.8.15_GA_3829) > > Hello, > > we have two machines same system 100 Gb of virtual disk both have the disks on a zfspool. > from the df -h point of view the disks use is quite similar. > we backup those machine regularly during the weekend using the scheduled backup of proxmox from the datastore menu. > we cannot understand why one machine have a .lzo file of 9.something gigas and the other has a backup .lzo file of around 40 gigas. > can you explain this to us ? Possibly thin-provisioning. In the storage definition the _Thin provision_ needs to be set, to create thin-provisioned zVOLs from the start. And OFC, you will need to have the VM disks setup with SCSI and discard on to run a TRIM inside the VM. 
-- Cheers, Alwin From mark at openvs.co.uk Wed Mar 11 09:26:35 2020 From: mark at openvs.co.uk (Mark Adams) Date: Wed, 11 Mar 2020 08:26:35 +0000 Subject: [PVE-User] lzo files conundrum In-Reply-To: References: <20200311075513.GD1385284@dona.proxmox.com> <1396743101.59299.1583913923448.JavaMail.zimbra@way-interactive.com> Message-ID: Is the data inside the VM's different? maybe the data on the bigger one is not as compressible? On Wed, 11 Mar 2020, 08:07 Renato Gallo via pve-user, < pve-user at pve.proxmox.com> wrote: > > > > ---------- Forwarded message ---------- > From: Renato Gallo > To: g noto > Cc: Alwin Antreich , pve-user < > pve-user at pve.proxmox.com> > Bcc: > Date: Wed, 11 Mar 2020 09:07:42 +0100 (CET) > Subject: Re: [PVE-User] lzo files conundrum > Hello, > > Might be I was not clear, I apologize. > it's not that I want both backups to be equal. > Both machines are set up as scsi and have equal storage configuration. > Both machines sit on the same proxmox server. > I would like to know why the backups gigas are so different one from > another. > > Renato Gallo > > > > ----- Original Message ----- > From: "Alwin Antreich" > To: "PVE User List" > Cc: "renato" , "Gaspare Noto" < > g.noto at way-interactive.com> > Sent: Wednesday, March 11, 2020 8:55:13 AM > Subject: Re: [PVE-User] lzo files conundrum > > Hello Renato, > > On Wed, Mar 11, 2020 at 07:35:21AM +0100, Renato Gallo via pve-user wrote: > > Date: Wed, 11 Mar 2020 07:35:21 +0100 (CET) > > From: Renato Gallo > > To: pve-user at pve.proxmox.com > > Cc: g noto > > Subject: lzo files conundrum > > X-Mailer: Zimbra 8.8.15_GA_3829 (ZimbraWebClient - FF68 > > (Linux)/8.8.15_GA_3829) > > > > Hello, > > > > we have two machines same system 100 Gb of virtual disk both have the > disks on a zfspool. > > from the df -h point of view the disks use is quite similar. > > we backup those machine regularly during the weekend using the scheduled > backup of proxmox from the datastore menu. > > we cannot understand why one machine have a .lzo file of 9.something > gigas and the other has a backup .lzo file of around 40 gigas. > > can you explain this to us ? > Possibly thin-provisioning. > > In the storage definition the _Thin provision_ needs to be set, to > create thin-provisioned zVOLs from the start. And OFC, you will need to > have the VM disks setup with SCSI and discard on to run a TRIM inside > the VM. > > -- > Cheers, > Alwin > > > > ---------- Forwarded message ---------- > From: Renato Gallo via pve-user > To: g noto > Cc: Renato Gallo , pve-user < > pve-user at pve.proxmox.com> > Bcc: > Date: Wed, 11 Mar 2020 09:07:42 +0100 (CET) > Subject: Re: [PVE-User] lzo files conundrum > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gregor at aeppelbroe.de Wed Mar 11 10:57:28 2020 From: gregor at aeppelbroe.de (Gregor Burck) Date: Wed, 11 Mar 2020 10:57:28 +0100 Subject: [PVE-User] Better understanding CEPH Pool definition Message-ID: <1717059.atdPhlSkOF@ph-pc014.peiker-holding.de> Hi, I've still problems to understand the pooling definition Size/min in ceph and what it means to us. We've a 3 node cluster with 4 SSDs (the smallest sinfull setup in the documention). When I define a ceph pool with 3/2, all implied images requiere the triple storrage. When I define a ceph pool with 2/1, all implied images requiere the double storrage. Means the Size/min the quantity of knodes, over wich the data is distributed? 
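(For reference: by "Size/min" I mean the pool's size and min_size values, i.e. what the following reports for an existing pool, with <poolname> as a placeholder:

ceph osd pool get <poolname> size
ceph osd pool get <poolname> min_size

These are the same two numbers one enters when creating the pool in the GUI.)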
But when I take a knode in maintenance, on the 3/2 setup are still 2 units of each images available, on 2/1 maybe only one. On a 3/2 setup the filesystem get readonly when 2 nodes are down in the same time, on a 2/1 setup it could be the storage is not available any more? I try to understand the impact for us, when we set two pools, one with 3/2 for important VMs and one with 2/1 for VMs wich could stand a while off, or 've no claim to be every time up to date or so,.... Bye Gregor From a.antreich at proxmox.com Wed Mar 11 14:12:23 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 11 Mar 2020 14:12:23 +0100 Subject: [PVE-User] Better understanding CEPH Pool definition In-Reply-To: <1717059.atdPhlSkOF@ph-pc014.peiker-holding.de> References: <1717059.atdPhlSkOF@ph-pc014.peiker-holding.de> Message-ID: <20200311131223.GF1385284@dona.proxmox.com> Hello Gregor, On Wed, Mar 11, 2020 at 10:57:28AM +0100, Gregor Burck wrote: > Hi, > > I've still problems to understand the pooling definition Size/min in ceph and what it means to us. > > We've a 3 node cluster with 4 SSDs (the smallest sinfull setup in the documention). :) > > When I define a ceph pool with 3/2, all implied images requiere the triple storrage. > When I define a ceph pool with 2/1, all implied images requiere the double storrage. Oi. :) Never go with 2/1. See below for the explanation. > > Means the Size/min the quantity of knodes, over wich the data is distributed? Not quite. The default distribution level is 'host', see the crush map [0]. Ceph will not place any PG on the same host again. So a node can fail without losing all replicas at once. > > But when I take a knode in maintenance, on the 3/2 setup are still 2 units of each images available, on 2/1 maybe only one. > On a 3/2 setup the filesystem get readonly when 2 nodes are down in the same time, on a 2/1 setup it could be the storage is not available any more? size, gives Ceph the amount of how many replicas it needs to create to get to an healthy state. Or in other words how often an object should be duplicated. min_size, up to how many replicas need to exist to still allow writes. If any PG in a pool drops below that, the pool will be placed in read-only. You can change those values for the pool [1] later on as well. It will create some extra IO on the cluster. > > I try to understand the impact for us, when we set two pools, one with 3/2 for important VMs and one with 2/1 for VMs wich could stand a while off, or 've no claim to be every time up to date or so,.... Taken from above, never go with X/1, especially in small clusters. While in a failure state with min_size = 1, the risk of losing the remaining copy on a subsequent failure or while in-flight (update) is quite high. For the later pool run it with 2/2. This way there will be always two copies needed, while not using the extra space for the third replica. The down side is OFC, that the pool will be unavailable if it has less than 2 replicas. -- Cheers, Alwin [0] https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/ [1] https://docs.ceph.com/docs/nautilus/rados/operations/pools/#set-pool-values From lindsay.mathieson at gmail.com Thu Mar 12 09:08:19 2020 From: lindsay.mathieson at gmail.com (Lindsay Mathieson) Date: Thu, 12 Mar 2020 18:08:19 +1000 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: References: Message-ID: <49b1d958-5813-3a8a-fdf3-6c5480b456c0@gmail.com> On 9/03/2020 10:35 pm, Eneko Lacunza wrote: > Performance-wise, they should be almost the same. 
qcow2 is a bit more > complex than a raw file/partition, but I don't think you'll notice any > impact. Maybe others can comment on this. qcow2 supports snapshots natively, raw requires the underlying storage (such as Ceph) to support snapshots. -- Lindsay From elacunza at binovo.es Thu Mar 12 09:52:51 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 12 Mar 2020 09:52:51 +0100 Subject: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1 In-Reply-To: References: <311d496a-35b6-6a77-f3be-3e604eb51619@binovo.es> Message-ID: <251e1ff9-f339-e728-e195-d193e47e92cf@binovo.es> Hi all, El 24/2/20 a las 10:10, Eneko Lacunza escribi?: > > El 20/2/20 a las 14:47, Eneko Lacunza escribi?: >> We tried running the main VM backup yesterday morning, but couldn't >>>> reproduce the issue, although during regular backup all 3 nodes are >>>> doing backups and in the test we only performed the backup of the only >>>> VM storaged on SSD pool. >>>> >>>> >>>> How about reducing (or scheduling at different times) the backup >>>> jobs on >>>> each node, at least for testing if the backup is causing the problem. >> I'll check with the site admin about this, didn't really think about >> this but could help determine if that is the issue, thanks! > We have skipped backups for the "server" VM and we had no disconnects > this weekend. Tried launching backup manually, no disconnect either. > > We'll schedule backup for that VM at a different time to avoid the > issue... > We're sure now backups weren't a problem. Seems we're having time sync issues on the site, both with VMs and with Proxmox nodes... Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From leandro at tecnetmza.com.ar Thu Mar 12 12:40:55 2020 From: leandro at tecnetmza.com.ar (Leandro Roggerone) Date: Thu, 12 Mar 2020 08:40:55 -0300 Subject: [PVE-User] qcow2 vs raw format In-Reply-To: <49b1d958-5813-3a8a-fdf3-6c5480b456c0@gmail.com> References: <49b1d958-5813-3a8a-fdf3-6c5480b456c0@gmail.com> Message-ID: Thanks! El jue., 12 mar. 2020 a las 5:09, Lindsay Mathieson (< lindsay.mathieson at gmail.com>) escribi?: > On 9/03/2020 10:35 pm, Eneko Lacunza wrote: > > Performance-wise, they should be almost the same. qcow2 is a bit more > > complex than a raw file/partition, but I don't think you'll notice any > > impact. Maybe others can comment on this. > > > qcow2 supports snapshots natively, raw requires the underlying storage > (such as Ceph) to support snapshots. > > -- > Lindsay > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From f.thommen at dkfz-heidelberg.de Thu Mar 12 16:35:05 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Thu, 12 Mar 2020 16:35:05 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container Message-ID: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> Dear all, we have a strange issue with a CentOS 7 container running on PVE 6.1-3, that UIDs > 65535 are invalid. The container is used as a "SSH jumphost" to access a special network: Users log in to the host and SSH to the special network from there. sssd is running in the container. The directory service is an Active Directory. However users with UID > 65535 cannot login: /var/log/secure: [...] Mar 12 13:48:32 XXXXXX sshd[1021]: fatal: seteuid 86544: Invalid argument [...] 
and chown isn't possible either: $ chown 65535 /home/test $ chown 65536 /home/test chown: changing ownership of ?/home/test?: Invalid argument $ There are no problems with such UIDs on any other systems and there is no problem with users with an UID <= 65535 within the container. I fear this might be a container-related issue but I don't understand it and I don't know if there is a solution or a workaround. Any help or hint is highly appreciated Frank From dietmar at proxmox.com Thu Mar 12 17:57:07 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Thu, 12 Mar 2020 17:57:07 +0100 (CET) Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> Message-ID: <1507175116.32.1584032227937@webmail.proxmox.com> > I fear > this might be a container-related issue but I don't understand it and I > don't know if there is a solution or a workaround. > > Any help or hint is highly appreciated Yes, we only map 65535 IDs for a single container. We cannot allow the full range for security reasons. From daniel at firewall-services.com Thu Mar 12 18:10:46 2020 From: daniel at firewall-services.com (Daniel Berteaud) Date: Thu, 12 Mar 2020 18:10:46 +0100 (CET) Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> Message-ID: <1973784351.18476.1584033046467.JavaMail.zimbra@fws.fr> ----- Le 12 Mar 20, ? 16:35, Frank Thommen f.thommen at dkfz-heidelberg.de a ?crit : > Dear all, > > we have a strange issue with a CentOS 7 container running on PVE 6.1-3, > that UIDs > 65535 are invalid. The container is used as a "SSH > jumphost" to access a special network: Users log in to the host and SSH > to the special network from there. sssd is running in the container. The > directory service is an Active Directory. > > However users with UID > 65535 cannot login: > > /var/log/secure: > [...] > Mar 12 13:48:32 XXXXXX sshd[1021]: fatal: seteuid 86544: Invalid argument > [...] > > > and chown isn't possible either: > > $ chown 65535 /home/test > $ chown 65536 /home/test > chown: changing ownership of ?/home/test?: Invalid argument > $ > > > There are no problems with such UIDs on any other systems and there is > no problem with users with an UID <= 65535 within the container. I fear > this might be a container-related issue but I don't understand it and I > don't know if there is a solution or a workaround. > > Any help or hint is highly appreciated You can work with higher UID in LXC with this : * Edit /etc/subuid and change the range. Eg root:100000:4000390000 * Do the same for /etc/subgid * Edit your container config (/etc/pve/lxc/XXX.conf) and add lxc.idmap: u 0 100000 2000200000 lxc.idmap: g 0 100000 2000200000 That's the values I'm using for some AD members containers. Note however that native PVE restore code might refuse to work with those UID (I recall the 65535 max UID hardcoded somewhere in the restore path, but can't remember exactly where) ++ -- [ https://www.firewall-services.com/ ] Daniel Berteaud FIREWALL-SERVICES SAS, La s?curit? des r?seaux Soci?t? 
de Services en Logiciels Libres T?l : +33.5 56 64 15 32 Matrix: @dani:fws.fr [ https://www.firewall-services.com/ | https://www.firewall-services.com ] From f.thommen at dkfz-heidelberg.de Thu Mar 12 19:55:25 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Thu, 12 Mar 2020 19:55:25 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <1973784351.18476.1584033046467.JavaMail.zimbra@fws.fr> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1973784351.18476.1584033046467.JavaMail.zimbra@fws.fr> Message-ID: <0da5af4a-84b7-352a-7f7e-81ff6bfbe94f@dkfz-heidelberg.de> On 3/12/20 6:10 PM, Daniel Berteaud wrote: > > > ----- Le 12 Mar 20, ? 16:35, Frank Thommen f.thommen at dkfz-heidelberg.de a ?crit : > >> Dear all, >> >> we have a strange issue with a CentOS 7 container running on PVE 6.1-3, >> that UIDs > 65535 are invalid. The container is used as a "SSH >> jumphost" to access a special network: Users log in to the host and SSH >> to the special network from there. sssd is running in the container. The >> directory service is an Active Directory. >> >> However users with UID > 65535 cannot login: >> >> /var/log/secure: >> [...] >> Mar 12 13:48:32 XXXXXX sshd[1021]: fatal: seteuid 86544: Invalid argument >> [...] >> >> >> and chown isn't possible either: >> >> $ chown 65535 /home/test >> $ chown 65536 /home/test >> chown: changing ownership of ?/home/test?: Invalid argument >> $ >> >> >> There are no problems with such UIDs on any other systems and there is >> no problem with users with an UID <= 65535 within the container. I fear >> this might be a container-related issue but I don't understand it and I >> don't know if there is a solution or a workaround. >> >> Any help or hint is highly appreciated > > You can work with higher UID in LXC with this : > > * Edit /etc/subuid and change the range. Eg > > root:100000:4000390000 > > * Do the same for /etc/subgid > * Edit your container config (/etc/pve/lxc/XXX.conf) and add > > lxc.idmap: u 0 100000 2000200000 > lxc.idmap: g 0 100000 2000200000 > > That's the values I'm using for some AD members containers. Note however that native PVE restore code might refuse to work with those UID (I recall the 65535 max UID hardcoded somewhere in the restore path, but can't remember exactly where) Unfortunately that doesn't work. The container will not start any more with the following messages in the debug log (shortened): ------------------------------------------------ [...] 
lxc-start 101 20200312185335.631 INFO conf - conf.c:run_script_argv:372 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc" lxc-start 101 20200312185336.964 DEBUG conf - conf.c:run_buffer:340 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: unable to detect OS distribution lxc-start: 101: conf.c: run_buffer: 352 Script exited with status 2 lxc-start: 101: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "101" lxc-start: 101: start.c: __lxc_start: 2032 Failed to initialize container "101" Segmentation fault ------------------------------------------------ Frank From f.thommen at dkfz-heidelberg.de Thu Mar 12 19:58:17 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Thu, 12 Mar 2020 19:58:17 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <1507175116.32.1584032227937@webmail.proxmox.com> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1507175116.32.1584032227937@webmail.proxmox.com> Message-ID: <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> On 3/12/20 5:57 PM, Dietmar Maurer wrote: >> I fear >> this might be a container-related issue but I don't understand it and I >> don't know if there is a solution or a workaround. >> >> Any help or hint is highly appreciated > > Yes, we only map 65535 IDs for a single container. We cannot allow > the full range for security reasons. What is the security related impact of higher UIDs? This is kind of a showstopper for us, as we planned several such minimal services which all need to be able to map all existing UIDs in the AD. The idea was to move them away from heavy full VMs to more lightweight containers. Frank From f.thommen at dkfz-heidelberg.de Fri Mar 13 14:13:54 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Fri, 13 Mar 2020 14:13:54 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1507175116.32.1584032227937@webmail.proxmox.com> <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> Message-ID: On 3/12/20 7:58 PM, Frank Thommen wrote: > On 3/12/20 5:57 PM, Dietmar Maurer wrote: >>> I fear >>> this might be a container-related issue but I don't understand it and I >>> don't know if there is a solution or a workaround. >>> >>> Any help or hint is highly appreciated >> >> Yes, we only map 65535 IDs for a single container. We cannot allow >> the full range for security reasons. > > What is the security related impact of higher UIDs?? This is kind of a > showstopper for us, as we planned several such minimal services which > all need to be able to map all existing UIDs in the AD. > > The idea was to move them away from heavy full VMs to more lightweight > containers. Or the other way round: What are the risks if we change the hardcoded limits in /usr/share/perl5/PVE/LXC.pm? (apart from the fact, that we will have to port the changes after each update and upgrade) frank From leesteken at pm.me Sun Mar 15 09:27:45 2020 From: leesteken at pm.me (leesteken at pm.me) Date: Sun, 15 Mar 2020 08:27:45 +0000 Subject: systemd networking.service fails after last dist-upgrade Message-ID: Dear Proxmox user list, My network and internet connections work fine but I'm getting a "State: degraded" from systemctl status, since an apt-get dist-upgrade yesterday. 
That Proxmox (6.1-7) update did rewrite my /etc/network/interface configuration file, but it looked harmless: #cat /etc/network/interfaces auto lo iface lo inet loopback iface enp4s0 inet manual auto vmbr0 iface vmbr0 inet static address 172.17.2.6/24 gateway 172.17.2.1 bridge-ports enp4s0 bridge-stp off bridge-fd 0 hwaddress 52:54:56:17:02:06 However, today after a reboot, I'm getting the following errors: #systemctl status networking.service ? networking.service - Network initialization Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Sun 2020-03-15 09:15:50 CET; 25s ago Docs: man:interfaces(5) man:ifup(8) man:ifdown(8) Process: 13819 ExecStart=/sbin/ifup -a (code=exited, status=0/SUCCESS) Process: 13894 ExecStart=/sbin/ifup --allow=ovs (code=exited, status=1/FAILURE) Main PID: 13894 (code=exited, status=1/FAILURE) #systemctl restart networking.service mrt 15 09:15:50 sentry systemd[1]: Starting Network initialization... mrt 15 09:15:50 sentry systemd[1]: Reloading Postfix Mail Transport Agent (instance -). mrt 15 09:15:50 sentry postfix/postfix-script[13854]: refreshing the Postfix mail system mrt 15 09:15:50 sentry postfix/master[4874]: reload -- version 3.4.8, configuration /etc/postfix mrt 15 09:15:50 sentry systemd[1]: Reloaded Postfix Mail Transport Agent (instance -). mrt 15 09:15:50 sentry systemd[1]: Reloading Postfix Mail Transport Agent. mrt 15 09:15:50 sentry systemd[1]: Reloaded Postfix Mail Transport Agent. mrt 15 09:15:50 sentry kernel: vmbr0: the hash_elasticity option has been deprecated and is always 16 mrt 15 09:15:50 sentry systemd[1]: Reloading Postfix Mail Transport Agent (instance -). mrt 15 09:15:50 sentry postfix/postfix-script[13885]: refreshing the Postfix mail system mrt 15 09:15:50 sentry postfix/master[4874]: reload -- version 3.4.8, configuration /etc/postfix mrt 15 09:15:50 sentry systemd[1]: Reloaded Postfix Mail Transport Agent (instance -). mrt 15 09:15:50 sentry systemd[1]: Reloading Postfix Mail Transport Agent. mrt 15 09:15:50 sentry systemd[1]: Reloaded Postfix Mail Transport Agent. mrt 15 09:15:50 sentry networking[13894]: error: main exception: no ifaces found matching given allow lists mrt 15 09:15:50 sentry systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE mrt 15 09:15:50 sentry systemd[1]: networking.service: Failed with result 'exit-code'. mrt 15 09:15:50 sentry systemd[1]: Failed to start Network initialization. Can anyone see what I have done wrong? Has someone experienced the same issue? Is there anything I should check or correct? (I have IPv6 disabled in sysctl because I don't use it.) kind regards, Arjen From aderumier at odiso.com Sun Mar 15 15:35:15 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Sun, 15 Mar 2020 15:35:15 +0100 (CET) Subject: [PVE-User] systemd networking.service fails after last dist-upgrade In-Reply-To: References: Message-ID: <1581010327.5274695.1584282915236.JavaMail.zimbra@odiso.com> Hi, edit /lib/systemd/system/networking.service and remove exec line with "ExecStart=/sbin/ifup --allow=ovs" It was a fix for openvswitch in ifupdown2, but it was fixed another way. I have sent a mail to pve-devel to remove the patch. thansk for reporting this. 
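A rough sequence to apply this by hand until the fixed package is out (untested sketch, double-check the unit file before editing; the change will be overwritten again on the next ifupdown2 update):

# drop the ExecStart/ExecStop lines that call ifup/ifdown with --allow=ovs
sed -i '/allow=ovs/d' /lib/systemd/system/networking.service
systemctl daemon-reload
systemctl restart networking.service
# clear the old failure so the overall system state is no longer "degraded"
systemctl reset-failed networking.service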
----- Mail original -----
De: "proxmoxve"
À: "proxmoxve"
Cc: leesteken at pm.me
Envoyé: Dimanche 15 Mars 2020 09:27:45
Objet: [PVE-User] systemd networking.service fails after last dist-upgrade

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From leesteken at pm.me Mon Mar 16 11:22:28 2020
From: leesteken at pm.me (leesteken at pm.me)
Date: Mon, 16 Mar 2020 10:22:28 +0000
Subject: [PVE-User] systemd networking.service fails after last dist-upgrade
In-Reply-To: <1581010327.5274695.1584282915236.JavaMail.zimbra@odiso.com>
References: <1581010327.5274695.1584282915236.JavaMail.zimbra@odiso.com>
Message-ID:

Removing the ExecStart (and ExecStop) lines that contain --allow=ovs indeed fixed the issue. I don't have any OVS packages installed, which is probably why it triggered on my setup. I'll revert the changes, use 'systemctl reset-failed networking.service' as a work-around and wait for the official fix. Thank you for the quick response!

------- Original Message -------
On Sunday, March 15, 2020 3:35 PM, Alexandre DERUMIER wrote:

> Hi,
> edit /lib/systemd/system/networking.service
>
> and remove exec line with "ExecStart=/sbin/ifup --allow=ovs"
>
> It was a fix for openvswitch in ifupdown2, but it was fixed another way.
>
> I have sent a mail to pve-devel to remove the patch.
>
> thansk for reporting this.
>
> ----- Mail original -----
> De: "proxmoxve" pve-user at pve.proxmox.com
> À: "proxmoxve" pve-user at pve.proxmox.com
> Cc: leesteken at pm.me
> Envoyé: Dimanche 15 Mars 2020 09:27:45
> Objet: [PVE-User] systemd networking.service fails after last dist-upgrade
>
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From aderumier at odiso.com Mon Mar 16 12:57:13 2020
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Mon, 16 Mar 2020 12:57:13 +0100 (CET)
Subject: [PVE-User] systemd networking.service fails after last dist-upgrade
In-Reply-To:
References: <1581010327.5274695.1584282915236.JavaMail.zimbra@odiso.com>
Message-ID: <1893442320.5316055.1584359833324.JavaMail.zimbra@odiso.com>

fixed package uploaded today

http://download.proxmox.com/debian/pve/dists/buster/pve-no-subscription/binary-amd64/ifupdown2_2.0.1-1%2Bpve8_all.deb

----- Mail original -----
De: "proxmoxve"
À: "proxmoxve"
Cc: "leesteken"
Envoyé: Lundi 16 Mars 2020 11:22:28
Objet: Re: [PVE-User] systemd networking.service fails after last dist-upgrade

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

From humbertos at ifsc.edu.br Mon Mar 16 15:11:15 2020
From: humbertos at ifsc.edu.br (Humberto Jose De Sousa)
Date: Mon, 16 Mar 2020 11:11:15 -0300 (BRT)
Subject: proxmox don't detect more than 26 disks ( /dev/sdXX)
Message-ID: <199025569.25565883.1584367875495.JavaMail.zimbra@ifsc.edu.br>

Hi there.

Until pve-kernel-4.15.18-23-pve all disks were detected. Since that kernel version only disks with names of the form /dev/sdX are detected; disks with names of the form /dev/sdXX are not. Does anyone have a fix for this?
Humberto A host running pve-kernel-4.15.18-23-pve : root at proxmoxblade2:~# dmesg | grep -i "Attached scsi disk" [ 2.992486] sd 4:1:0:0: [sda] Attached SCSI disk [ 4.352963] sd 5:0:0:5: [sdg] Attached SCSI disk [ 4.353851] sd 5:0:1:5: [sdv] Attached SCSI disk [ 4.354109] sd 5:0:0:6: [sdh] Attached SCSI disk [ 4.354302] sd 3:0:0:5: [sdt] Attached SCSI disk [ 4.356791] sd 3:0:0:6: [sdu] Attached SCSI disk [ 4.356997] sd 3:0:0:1: [sdl] Attached SCSI disk [ 4.357536] sd 5:0:0:0: [sdb] Attached SCSI disk [ 4.358123] sd 3:0:0:0: [sdk] Attached SCSI disk [ 4.358500] sd 5:0:0:1: [sdc] Attached SCSI disk [ 4.360485] sd 5:0:1:6: [sdx] Attached SCSI disk [ 4.360496] sd 5:0:0:2: [sdd] Attached SCSI disk [ 4.360508] sd 3:0:1:1: [sdaa] Attached SCSI disk [ 4.361332] sd 3:0:0:7: [sdw] Attached SCSI disk [ 4.366430] sd 3:0:1:5: [sdae] Attached SCSI disk [ 4.366539] sd 3:0:1:2: [sdab] Attached SCSI disk [ 4.368119] sd 5:0:0:3: [sde] Attached SCSI disk [ 4.368546] sd 3:0:0:3: [sdp] Attached SCSI disk [ 4.368747] sd 5:0:0:7: [sdi] Attached SCSI disk [ 4.370122] sd 3:0:0:2: [sdn] Attached SCSI disk [ 4.370974] sd 3:0:1:0: [sdy] Attached SCSI disk [ 4.372401] sd 5:0:1:7: [sdz] Attached SCSI disk [ 4.372699] sd 3:0:1:7: [sdag] Attached SCSI disk [ 4.372858] sd 3:0:1:6: [sdaf] Attached SCSI disk [ 4.373097] sd 5:0:0:4: [sdf] Attached SCSI disk [ 4.373284] sd 5:0:1:0: [sdj] Attached SCSI disk [ 4.373394] sd 5:0:1:4: [sds] Attached SCSI disk [ 4.373399] sd 5:0:1:1: [sdm] Attached SCSI disk [ 4.373462] sd 5:0:1:2: [sdo] Attached SCSI disk [ 4.373669] sd 3:0:1:3: [sdac] Attached SCSI disk [ 4.373742] sd 3:0:1:4: [sdad] Attached SCSI disk [ 4.373808] sd 3:0:0:4: [sdr] Attached SCSI disk [ 4.373849] sd 5:0:1:3: [sdq] Attached SCSI disk A host running pve-kernel-4.15.18-26-pve : root at proxmoxblade1:~# dmesg | grep -i "Attached scsi disk" [ 3.208364] sd 1:1:0:0: [sda] Attached SCSI disk [ 4.966731] sd 2:0:1:1: [sdj] Attached SCSI disk [ 4.966851] sd 2:0:0:1: [sdc] Attached SCSI disk [ 4.968964] sd 2:0:1:0: [sdi] Attached SCSI disk [ 4.970252] sd 2:0:0:0: [sdb] Attached SCSI disk [ 4.973387] sd 2:0:1:6: [sdo] Attached SCSI disk [ 4.973624] sd 2:0:0:6: [sdh] Attached SCSI disk [ 4.974611] sd 2:0:0:5: [sdg] Attached SCSI disk [ 4.975113] sd 2:0:1:5: [sdn] Attached SCSI disk [ 4.978017] sd 2:0:1:2: [sdk] Attached SCSI disk [ 4.978243] sd 2:0:0:2: [sdd] Attached SCSI disk [ 4.990168] sd 2:0:1:3: [sdl] Attached SCSI disk [ 4.990538] sd 2:0:0:3: [sde] Attached SCSI disk [ 4.991909] sd 2:0:0:4: [sdf] Attached SCSI disk [ 4.991912] sd 2:0:1:4: [sdm] Attached SCSI disk [ 5.961717] sd 5:0:0:0: [sdp] Attached SCSI disk [ 5.962277] sd 5:0:0:1: [sdq] Attached SCSI disk [ 5.962778] sd 5:0:0:2: [sdr] Attached SCSI disk [ 5.964837] sd 5:0:0:3: [sds] Attached SCSI disk [ 5.965380] sd 5:0:0:4: [sdt] Attached SCSI disk [ 5.966490] sd 5:0:0:5: [sdu] Attached SCSI disk [ 5.967749] sd 5:0:0:6: [sdv] Attached SCSI disk From d.csapak at proxmox.com Mon Mar 16 15:23:39 2020 From: d.csapak at proxmox.com (Dominik Csapak) Date: Mon, 16 Mar 2020 15:23:39 +0100 Subject: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX) In-Reply-To: References: Message-ID: <91cccb94-fc8c-23be-8403-2dcec790375a@proxmox.com> On 3/16/20 3:11 PM, Humberto Jose De Sousa via pve-user wrote: > Hi there. > > Until pve-kernel-4.15.18-23-pve all disks did detected. After this kernel version, only disks with format /dev/sdX was detected. Disks with format /dev/sdXX don't are detected. 
> from the output it seems that your pci devices '03:00' and '04:00' do not show any disk anymore whats the output of 'lspci -k' on both kernels? From humbertos at ifsc.edu.br Mon Mar 16 15:50:07 2020 From: humbertos at ifsc.edu.br (Humberto Jose De Sousa) Date: Mon, 16 Mar 2020 11:50:07 -0300 (BRT) Subject: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX) In-Reply-To: <91cccb94-fc8c-23be-8403-2dcec790375a@proxmox.com> References: <91cccb94-fc8c-23be-8403-2dcec790375a@proxmox.com> Message-ID: <427869192.25596447.1584370207191.JavaMail.zimbra@ifsc.edu.br> pve-kernel-4.15.18-23-pve root at proxmoxblade2:~# lspci -k 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part (rev 02) Subsystem: Hewlett-Packard Company RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part 00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD/ATI] RD890S/RD990 I/O Memory Management Unit (IOMMU) Subsystem: Hewlett-Packard Company RD890S/RD990 I/O Memory Management Unit (IOMMU) 00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 0) Kernel driver in use: pcieport Kernel modules: shpchp 00:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 5) Kernel driver in use: pcieport Kernel modules: shpchp 00:0b.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 0) Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 1) Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] Kernel driver in use: ahci Kernel modules: ahci 00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Kernel driver in use: ohci-pci 00:12.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 Controller Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller Kernel driver in use: ohci-pci 00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller Kernel driver in use: ehci-pci 00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Kernel driver in use: ohci-pci 00:13.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 Controller Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller Kernel driver in use: ohci-pci 00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller Kernel driver in use: ehci-pci 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller (rev 3d) Subsystem: Hewlett-Packard Company SBx00 SMBus Controller Kernel driver in use: piix4_smbus Kernel modules: i2c_piix4, sp5100_tco 00:14.1 IDE interface: Advanced Micro Devices, Inc. 
[AMD/ATI] SB7x0/SB8x0/SB9x0 IDE Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 IDE Controller Kernel driver in use: pata_atiixp Kernel modules: pata_atiixp, pata_acpi 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 LPC host controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 LPC host controller 00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI Bridge 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel driver in use: fam15h_power Kernel modules: fam15h_power 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel modules: fam15h_power 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:1a.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:1a.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel driver in use: fam15h_power Kernel modules: fam15h_power 00:1a.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:1b.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:1b.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:1b.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:1b.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:1b.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel modules: fam15h_power 00:1b.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 05) Subsystem: Hewlett-Packard Company iLO4 Kernel modules: hpwdt 02:00.1 VGA compatible controller: Matrox Electronics Systems Ltd. 
MGA G200EH Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: mgag200 Kernel modules: mgag200 02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 05) Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: hpilo Kernel modules: hpilo 02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 02) Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: uhci_hcd 03:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01) Subsystem: Hewlett-Packard Company P220i Kernel driver in use: hpsa Kernel modules: hpsa 04:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.2 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.3 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA Kernel driver in use: qla2xxx Kernel modules: qla2xxx 05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA Kernel driver in use: qla2xxx Kernel modules: qla2xxx pve-kernel-4.15.18-26-pve root at proxmoxblade1:~# lspci -k 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part (rev 02) Subsystem: Hewlett-Packard Company RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part 00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD/ATI] RD890S/RD990 I/O Memory Management Unit (IOMMU) Subsystem: Hewlett-Packard Company RD890S/RD990 I/O Memory Management Unit (IOMMU) 00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 0) Kernel driver in use: pcieport Kernel modules: shpchp 00:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 5) Kernel driver in use: pcieport Kernel modules: shpchp 00:0b.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 0) Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 1) Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] Kernel driver in use: ahci Kernel modules: ahci 00:12.0 USB controller: Advanced Micro Devices, Inc. 
[AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Kernel driver in use: ohci-pci 00:12.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 Controller Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller Kernel driver in use: ohci-pci 00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller Kernel driver in use: ehci-pci 00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller Kernel driver in use: ohci-pci 00:13.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 Controller Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller Kernel driver in use: ohci-pci 00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller Kernel driver in use: ehci-pci 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller (rev 3d) Subsystem: Hewlett-Packard Company SBx00 SMBus Controller Kernel driver in use: piix4_smbus Kernel modules: i2c_piix4, sp5100_tco 00:14.1 IDE interface: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 IDE Controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 IDE Controller Kernel driver in use: pata_atiixp Kernel modules: pata_atiixp, pata_acpi 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 LPC host controller Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 LPC host controller 00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI Bridge 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel driver in use: fam15h_power Kernel modules: fam15h_power 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel modules: fam15h_power 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:1a.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:1a.4 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Family 15h Processor Function 4 Kernel driver in use: fam15h_power Kernel modules: fam15h_power 00:1a.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 00:1b.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 00:1b.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 00:1b.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 00:1b.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 Kernel driver in use: k10temp Kernel modules: k10temp 00:1b.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 Kernel modules: fam15h_power 00:1b.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 05) Subsystem: Hewlett-Packard Company iLO4 Kernel modules: hpwdt 02:00.1 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200EH Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: mgag200 Kernel modules: mgag200 02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 05) Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: hpilo Kernel modules: hpilo 02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 02) Subsystem: Hewlett-Packard Company iLO4 Kernel driver in use: uhci_hcd 03:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01) Subsystem: Hewlett-Packard Company P220i Kernel driver in use: hpsa Kernel modules: hpsa 04:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.2 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 04:00.3 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] Kernel driver in use: bnx2x Kernel modules: bnx2x 05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA Kernel driver in use: qla2xxx Kernel modules: qla2xxx 05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA Kernel driver in use: qla2xxx Kernel modules: qla2xxx De: "Dominik Csapak" Para: pve-user at pve.proxmox.com Enviadas: Segunda-feira, 16 de mar?o de 2020 11:23:39 Assunto: Re: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX) On 3/16/20 3:11 PM, Humberto Jose De Sousa via pve-user wrote: > Hi there. > > Until pve-kernel-4.15.18-23-pve all disks did detected. After this kernel version, only disks with format /dev/sdX was detected. 
Disks with format /dev/sdXX don't are detected. > from the output it seems that your pci devices '03:00' and '04:00' do not show any disk anymore whats the output of 'lspci -k' on both kernels? _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From f.thommen at dkfz-heidelberg.de Tue Mar 17 09:15:34 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Tue, 17 Mar 2020 09:15:34 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1507175116.32.1584032227937@webmail.proxmox.com> <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> Message-ID: Dear all, On 13.03.20 14:13, Frank Thommen wrote: > On 3/12/20 7:58 PM, Frank Thommen wrote: >> On 3/12/20 5:57 PM, Dietmar Maurer wrote: >>>> I fear >>>> this might be a container-related issue but I don't understand it and I >>>> don't know if there is a solution or a workaround. >>>> >>>> Any help or hint is highly appreciated >>> >>> Yes, we only map 65535 IDs for a single container. We cannot allow >>> the full range for security reasons. >> >> What is the security related impact of higher UIDs?? This is kind of a >> showstopper for us, as we planned several such minimal services which >> all need to be able to map all existing UIDs in the AD. >> >> The idea was to move them away from heavy full VMs to more lightweight >> containers. > > Or the other way round: What are the risks if we change the hardcoded > limits in /usr/share/perl5/PVE/LXC.pm? (apart from the fact, that we > will have to port the changes after each update and upgrade) Does anyone have an assessment of the risk we would run? I still don't understand the security implications of the mapping of higher UIDs. However this is quickly becoming a major issue for us. Cheers Frank From dietmar at proxmox.com Tue Mar 17 09:33:27 2020 From: dietmar at proxmox.com (Dietmar Maurer) Date: Tue, 17 Mar 2020 09:33:27 +0100 (CET) Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1507175116.32.1584032227937@webmail.proxmox.com> <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> Message-ID: <1374699376.34.1584434007501@webmail.proxmox.com> > Does anyone have an assessment of the risk we would run? I still don't > understand the security implications of the mapping of higher UIDs. > However this is quickly becoming a major issue for us. The risk is that it is not supported by us. Thus, we do not test that and I do not know what problems this may trigger... From f.thommen at dkfz-heidelberg.de Tue Mar 17 09:45:50 2020 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Tue, 17 Mar 2020 09:45:50 +0100 Subject: [PVE-User] UIDs > 65535 not valid in container In-Reply-To: <1374699376.34.1584434007501@webmail.proxmox.com> References: <33dfd8a6-bdc9-cdfd-04f8-dc75708c52e4@dkfz-heidelberg.de> <1507175116.32.1584032227937@webmail.proxmox.com> <59654435-b07c-3dc0-bbe5-8729c12954e4@dkfz-heidelberg.de> <1374699376.34.1584434007501@webmail.proxmox.com> Message-ID: On 17.03.20 09:33, Dietmar Maurer wrote: >> Does anyone have an assessment of the risk we would run? I still don't >> understand the security implications of the mapping of higher UIDs. >> However this is quickly becoming a major issue for us. > > The risk is that it is not supported by us. 
Thus, we do not > test that and I do not know what problems this may trigger... > ok. I will take the risk then, because w/o that mapping we cannot use the containers. Thanks Frank

From krienke at uni-koblenz.de Tue Mar 17 14:04:22 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Tue, 17 Mar 2020 14:04:22 +0100
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
Message-ID: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>

Hello,

I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic disks. The pool with rbd images for disk storage is erasure coded with a 4+2 profile.

I ran some performance tests since I noticed that there seems to be a strange limit to the disk read/write rate of a single VM, even though the physical machine hosting the VM, as well as the cluster as a whole, is capable of doing much more.

So what I did was to run a bonnie++ as well as a dd read/write test, first in parallel on 10 VMs, then on 5 VMs and finally on a single one.

A value of "75" for "bo++rd" in the first line below means that each of the 10 bonnie++ processes running on 10 different proxmox VMs in parallel reported, on average over all results, 75 MBytes/sec for "block read". The ceph values are the peaks measured by ceph itself during the test run (all rd/wr values in MBytes/sec):

VM-count:  bo++rd:  bo++wr:  ceph(rd/wr):  dd-rd:  dd-wr:  ceph(rd/wr):
10           75       42     540/485        55      58     698/711
 5           90       62     310/338        47      80     248/421
 1          108      114     111/120       130     145     337/165

What I find a little strange is that with many VMs doing IO in parallel I reach a total write rate of about 485-711 MBytes/sec, whereas a single VM tops out at 120-165 MBytes/sec. Since the whole network is a 10Gb infrastructure, and an iperf test between a VM and a ceph node reported nearly 10Gb/s, I would expect a higher rate for a single VM. Even if I run a test with 5 VMs on *one* physical host (values not shown above), the results are not far behind the values for 5 VMs on 5 hosts shown above. So the single host does not seem to be the limiting factor; the VM itself is what limits IO.

What rates do you see on your proxmox/ceph clusters for single VMs? Does anyone have an explanation for this rather big difference, or perhaps an idea what to try in order to get higher IO rates from a single VM?
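(Side note on the single-VM numbers: the dd runs listed below are a single sequential stream from one process. For comparison, a load that keeps more requests in flight could look roughly like

fio --name=qd16 --filename=/dev/sdb --rw=write --bs=4M --iodepth=16 --ioengine=libaio --direct=1 --size=12G

The fio command is only meant as an illustration of a more parallel workload; it was not part of the tests above.)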
Thank you very much in advance Rainer --------------------------------------------- Here are the more detailed test results for anyone interested: Using bonnie++: 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for each VM: block write: ~42MByte/sec, block read: ~75MByte/sec ceph: total peak: 485MByte/sec write, 540MByte/sec read 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for each VM: block write: ~62MByte/sec, block read: ~90MByte/sec ceph: total peak: 338MByte/sec write, 310MByte/sec read 1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for VM: block write: ~114 MByte/sec, block read: ~108MByte/sec ceph: total peak: 120 MByte/sec write, 111MByte/sec read Using dd: 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd) write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress Average for each VM: bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec ceph: total peak: 711MByte/sec write, 698 MByte/sec read 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd) write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress Average for each VM: bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec ceph: total peak: 421MByte/sec write, 248 MByte/sec read 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress Average for each VM: bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec ceph: total peak: 165 MByte/sec write, 337 MByte/sec read -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 From elacunza at binovo.es Tue Mar 17 14:10:25 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 17 Mar 2020 14:10:25 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> Message-ID: <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> Hi, You can try to enable IO threads and assign multiple Ceph disks to the VM, then build some kind of raid0 to increase performance. Generally speaking, a SSD based Ceph cluster is considered to perform well when a VM gets about 2000 IOPS, and factors like CPU 1-thread performance, network and disk have to be selected with care. Also server's energy saving disabled, etc. What CPUs in those 9 nodes? Ceph is built for parallel access and scaling. You're only using 1 thread of your VM for disk IO currently. Cheers Eneko El 17/3/20 a las 14:04, Rainer Krienke escribi?: > Hello, > > I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb > Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic > disks. The pool with rbd images for disk storage is erasure coded with a > 4+2 profile. 
> > I ran some performance tests since I noticed that there seems to be a > strange limit to the disk read/write rate on a single VM even if the > physical machine hosting the VM as well as cluster is in total capable > of doing much more. > > So what I did was to run a bonnie++ as well as a dd read/write test > first in parallel on 10 VMs, then on 5 VMs and at last on a single one. > > A value of "75" for "bo++rd" in the first line below means that each of > the 10 bonnie++-processes running on 10 different proxmox VMs in > parallel reported in average over all the results a value of > 75MBytes/sec for "block read". The ceph-values are the peaks measured by > ceph itself during the test run (all rd/wr values in MBytes/sec): > > VM-count: bo++rd: bo++wr: ceph(rd/wr): dd-rd: dd-wr: ceph(rd/wr): > 10 75 42 540/485 55 58 698/711 > 5 90 62 310/338 47 80 248/421 > 1 108 114 111/120 130 145 337/165 > > > What I find a little strange is that running many VMs doing IO in > parallel I reach a write rate of about 485-711 MBytes/sec. However when > running a single VM the maximum is at 120-165 MBytes/sec. Since the > whole networking is based on a 10GB infrastructure and an iperf test > between a VM and a ceph node reported nearby 10Gb I would expect a > higher rate for the single VM. Even if I run a test with 5 VMs on *one* > physical host (values not shown above), the results are not far behind > the values for 5 VMs on 5 hosts shown above. So the single host seems > not to be the limiting factor, but the VM itself is limiting IO. > > What rates do you find on your proxmox/ceph cluster for single VMs? > Does any one have any explanation for this rather big difference or > perhaps an idea what to try in order to get higher IO-rates from a > single VM? > > Thank you very much in advance > Rainer > > > > --------------------------------------------- > Here are the more detailed test results for anyone interested: > > Using bonnie++: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; > bonnie++ -u root > Average for each VM: > block write: ~42MByte/sec, block read: ~75MByte/sec > ceph: total peak: 485MByte/sec write, 540MByte/sec read > > 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u > root > Average for each VM: > block write: ~62MByte/sec, block read: ~90MByte/sec > ceph: total peak: 338MByte/sec write, 310MByte/sec read > > 1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root > Average for VM: > block write: ~114 MByte/sec, block read: ~108MByte/sec > ceph: total peak: 120 MByte/sec write, 111MByte/sec read > > > Using dd: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress > Average for each VM: > bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec > bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec > ceph: total peak: 711MByte/sec write, 698 MByte/sec read > > 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec > ceph: total peak: 421MByte/sec write, 248 MByte/sec read > > 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) > write: dd 
if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec > ceph: total peak: 165 MByte/sec write, 337 MByte/sec read -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From ralf.storm at konzept-is.de Tue Mar 17 14:53:52 2020 From: ralf.storm at konzept-is.de (Ralf Storm) Date: Tue, 17 Mar 2020 14:53:52 +0100 Subject: [PVE-User] [SPAM] Proxmox with ceph storage VM performance strangeness In-Reply-To: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> Message-ID: Hello Rainer, same issue here 7nodes Cluster with Proxmox and Ceph on the same nodes seperate 10gb for Ceph and 10gb for VMs, not erasure coded, about 50 ssd. Performance for Backups, recovery etc. is almost 1000MByte/s, several vms accessing data at the same time is raising the perfomance also to the expected levels, but single vm access is never above 150MByte... Could not exaxctly figure out why in my searches... regards Ralf Am 17/03/2020 um 14:04 schrieb Rainer Krienke: > Hello, > > I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb > Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic > disks. The pool with rbd images for disk storage is erasure coded with a > 4+2 profile. > > I ran some performance tests since I noticed that there seems to be a > strange limit to the disk read/write rate on a single VM even if the > physical machine hosting the VM as well as cluster is in total capable > of doing much more. > > So what I did was to run a bonnie++ as well as a dd read/write test > first in parallel on 10 VMs, then on 5 VMs and at last on a single one. > > A value of "75" for "bo++rd" in the first line below means that each of > the 10 bonnie++-processes running on 10 different proxmox VMs in > parallel reported in average over all the results a value of > 75MBytes/sec for "block read". The ceph-values are the peaks measured by > ceph itself during the test run (all rd/wr values in MBytes/sec): > > VM-count: bo++rd: bo++wr: ceph(rd/wr): dd-rd: dd-wr: ceph(rd/wr): > 10 75 42 540/485 55 58 698/711 > 5 90 62 310/338 47 80 248/421 > 1 108 114 111/120 130 145 337/165 > > > What I find a little strange is that running many VMs doing IO in > parallel I reach a write rate of about 485-711 MBytes/sec. However when > running a single VM the maximum is at 120-165 MBytes/sec. Since the > whole networking is based on a 10GB infrastructure and an iperf test > between a VM and a ceph node reported nearby 10Gb I would expect a > higher rate for the single VM. Even if I run a test with 5 VMs on *one* > physical host (values not shown above), the results are not far behind > the values for 5 VMs on 5 hosts shown above. So the single host seems > not to be the limiting factor, but the VM itself is limiting IO. > > What rates do you find on your proxmox/ceph cluster for single VMs? > Does any one have any explanation for this rather big difference or > perhaps an idea what to try in order to get higher IO-rates from a > single VM? 
> > Thank you very much in advance > Rainer > > > > --------------------------------------------- > Here are the more detailed test results for anyone interested: > > Using bonnie++: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; > bonnie++ -u root > Average for each VM: > block write: ~42MByte/sec, block read: ~75MByte/sec > ceph: total peak: 485MByte/sec write, 540MByte/sec read > > 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u > root > Average for each VM: > block write: ~62MByte/sec, block read: ~90MByte/sec > ceph: total peak: 338MByte/sec write, 310MByte/sec read > > 1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root > Average for VM: > block write: ~114 MByte/sec, block read: ~108MByte/sec > ceph: total peak: 120 MByte/sec write, 111MByte/sec read > > > Using dd: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress > Average for each VM: > bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec > bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec > ceph: total peak: 711MByte/sec write, 698 MByte/sec read > > 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec > ceph: total peak: 421MByte/sec write, 248 MByte/sec read > > 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) > write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec > ceph From a.antreich at proxmox.com Tue Mar 17 15:26:00 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Tue, 17 Mar 2020 15:26:00 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> Message-ID: <20200317142600.GB416444@dona.proxmox.com> Hallo Rainer, On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote: > Hello, > > I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb > Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic > disks. The pool with rbd images for disk storage is erasure coded with a > 4+2 profile. > > I ran some performance tests since I noticed that there seems to be a > strange limit to the disk read/write rate on a single VM even if the > physical machine hosting the VM as well as cluster is in total capable > of doing much more. > > So what I did was to run a bonnie++ as well as a dd read/write test > first in parallel on 10 VMs, then on 5 VMs and at last on a single one. > > A value of "75" for "bo++rd" in the first line below means that each of > the 10 bonnie++-processes running on 10 different proxmox VMs in > parallel reported in average over all the results a value of > 75MBytes/sec for "block read". 
The ceph-values are the peaks measured by > ceph itself during the test run (all rd/wr values in MBytes/sec): > > VM-count: bo++rd: bo++wr: ceph(rd/wr): dd-rd: dd-wr: ceph(rd/wr): > 10 75 42 540/485 55 58 698/711 > 5 90 62 310/338 47 80 248/421 > 1 108 114 111/120 130 145 337/165 > > > What I find a little strange is that running many VMs doing IO in > parallel I reach a write rate of about 485-711 MBytes/sec. However when > running a single VM the maximum is at 120-165 MBytes/sec. Since the > whole networking is based on a 10GB infrastructure and an iperf test > between a VM and a ceph node reported nearby 10Gb I would expect a > higher rate for the single VM. Even if I run a test with 5 VMs on *one* > physical host (values not shown above), the results are not far behind > the values for 5 VMs on 5 hosts shown above. So the single host seems > not to be the limiting factor, but the VM itself is limiting IO. > > What rates do you find on your proxmox/ceph cluster for single VMs? > Does any one have any explanation for this rather big difference or > perhaps an idea what to try in order to get higher IO-rates from a > single VM? > > Thank you very much in advance > Rainer As Eneko said, single thread vs multiple threads. How are you VMs configured (qm config )? > > > > --------------------------------------------- > Here are the more detailed test results for anyone interested: > > Using bonnie++: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; > bonnie++ -u root > Average for each VM: > block write: ~42MByte/sec, block read: ~75MByte/sec > ceph: total peak: 485MByte/sec write, 540MByte/sec read > > 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u > root > Average for each VM: > block write: ~62MByte/sec, block read: ~90MByte/sec > ceph: total peak: 338MByte/sec write, 310MByte/sec read > > 1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root > Average for VM: > block write: ~114 MByte/sec, block read: ~108MByte/sec > ceph: total peak: 120 MByte/sec write, 111MByte/sec read How did you configure bonnie? And a CoW filesystem on top of Ceph will certainly drop performance. > > > Using dd: > 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress > Average for each VM: > bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec > bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec > ceph: total peak: 711MByte/sec write, 698 MByte/sec read > > 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based > vm-disk "sdb" (rbd) > write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec > ceph: total peak: 421MByte/sec write, 248 MByte/sec read > > 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) > write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync > status=progress > read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress > Average for each VM: > bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec > ceph: total peak: 165 MByte/sec write, 337 MByte/sec read dd is not well suited for performance benchmarking. Better use bonnie++ or FIO. The later is good for storage benchmarks in general. 
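As a rough illustration of that advice, and of the lower-layer checks suggested in the next paragraph — the device, pool name "ceph-vm", image name "vm-100-disk-1" and runtimes below are placeholders, not values confirmed in this thread:

   # inside the guest, against the scratch test disk (destroys its data, like the dd runs)
   fio --name=vm-seq --filename=/dev/sdb --direct=1 --ioengine=libaio \
       --rw=write --bs=4M --iodepth=16 --runtime=60 --time_based

   # on a node with a Ceph client, directly against the RBD image via fio's built-in rbd engine
   fio --name=rbd-rand --ioengine=rbd --clientname=admin --pool=ceph-vm \
       --rbdname=vm-100-disk-1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

   # raw pool performance with rados bench
   rados bench -p ceph-vm 60 write -b 4M -t 16 --no-cleanup
   rados bench -p ceph-vm 60 seq -t 16
   rados -p ceph-vm cleanup

Comparing the guest block device, librbd and raw RADOS numbers usually shows at which layer the single-client limit appears.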
As you now have the results from the top-most layer. How do the lower layers perform? Eg. FIO has a build-in rbd engine and is able to talk directly to an rbd image. For Ceph pool performance a rados bench can sure shed some light. -- Cheers, Alwin From krienke at uni-koblenz.de Tue Mar 17 16:58:30 2020 From: krienke at uni-koblenz.de (Rainer Krienke) Date: Tue, 17 Mar 2020 16:58:30 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> Message-ID: <9aada20e-6cf4-d06b-53a0-452418489713@uni-koblenz.de> Hello, thanks for your answer, if I understand you correctly, than iothreads can only help if the VM has more than one disk, hence your proposal to build a raid0 on two rbd devices. The disadvantage of this solution would of course be that disk usage would be doubled. A fileserver VM I manage (not yet productive) could profit from this. I use LVM on it anyway and I could use striped LVs, so those volumes would read from more vm pv disks. Should help I guess. The hosts CPU is a AMD EPYC 7402 24-Core Processor. Does it make sense to select a specific CPU-type for the VM. My test machines have a default kvm64 processor. The number of processors should then probably be minimal equal to the number of disks (number of iothreads)? Do you know if it makes any difference wheater I use the VirtIO SCSI-driver versus the Virtio-SCSI-single driver? Thank you very much Rainer Am 17.03.20 um 14:10 schrieb Eneko Lacunza: > Hi, > > You can try to enable IO threads and assign multiple Ceph disks to the > VM, then build some kind of raid0 to increase performance. > > Generally speaking, a SSD based Ceph cluster is considered to perform > well when a VM gets about 2000 IOPS, and factors like CPU 1-thread > performance, network and disk have to be selected with care. Also > server's energy saving disabled, etc. > > What CPUs in those 9 nodes? > > Ceph is built for parallel access and scaling. You're only using 1 > thread of your VM for disk IO currently. > > Cheers > Eneko > > El 17/3/20 a las 14:04, Rainer Krienke escribi?: >> Hello, >> >> I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb >> Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic >> disks. The pool with rbd images for disk storage is erasure coded with a >> 4+2 profile. >> >> I ran some performance tests since I noticed that there seems to be a >> strange limit to the disk read/write rate on a single VM even if the >> physical machine hosting the VM as well as cluster is in total capable >> of doing much more. >> >> So what I did was to run a bonnie++ as well as a dd read/write test >> first in parallel on 10 VMs, then on 5 VMs and at last on a single one. >> >> A value of "75" for "bo++rd" in the first line below means that each of >> the 10 bonnie++-processes running on 10 different proxmox VMs in >> parallel reported in average over all the results a value of >> 75MBytes/sec for "block read". The ceph-values are the peaks measured by >> ceph itself during the test run (all rd/wr values in MBytes/sec): >> >> VM-count:? bo++rd: bo++wr: ceph(rd/wr):? dd-rd:? dd-wr:? ceph(rd/wr): >> 10?????????? 75????? 42????? 540/485?????? 55???? 58????? 698/711 >> ? 5?????????? 90????? 62????? 310/338?????? 47???? 80????? 248/421 >> ? 1????????? 108???? 114????? 111/120????? 130??? 145????? 
337/165 >> >> >> What I find a little strange is that running many VMs doing IO in >> parallel I reach a write rate of about 485-711 MBytes/sec. However when >> running a single VM the maximum is at 120-165 MBytes/sec. Since the >> whole networking is based on a 10GB infrastructure and an iperf test >> between a VM and a ceph node reported nearby 10Gb I would expect a >> higher rate for the single VM. Even if I run a test with 5 VMs on *one* >> physical host (values not shown above), the results are not far behind >> the values for 5 VMs on 5 hosts shown above. So the single host seems >> not to be the limiting factor, but the VM itself is limiting IO. >> >> What rates do you find on your proxmox/ceph cluster for single VMs? >> Does any one have any explanation for this rather big difference or >> perhaps an idea what to try in order to get higher IO-rates from a >> single VM? >> >> Thank you very much in advance >> Rainer >> >> >> >> --------------------------------------------- >> Here are the more detailed test results for anyone interested: >> >> Using bonnie++: >> 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; >> bonnie++ -u root >> ?? Average for each VM: >> ?? block write: ~42MByte/sec, block read: ~75MByte/sec >> ?? ceph: total peak: 485MByte/sec write, 540MByte/sec read >> >> 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u >> root >> ?? Average for each VM: >> ?? block write: ~62MByte/sec, block read: ~90MByte/sec >> ?? ceph: total peak: 338MByte/sec write, 310MByte/sec read >> >> 1 VM? 4GB RAM, BTRFS, cd /root; bonnie++ -u root >> ?? Average for VM: >> ?? block write: ~114 MByte/sec, block read: ~108MByte/sec >> ?? ceph: total peak: 120 MByte/sec write, 111MByte/sec read >> >> >> Using dd: >> 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based >> vm-disk "sdb" (rbd) >> ?? write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync >> status=progress >> ?? read:? dd of=/dev/null if=/dev/sdb bs=nnn count=kkk? status=progress >> ?? Average for each VM: >> ?? bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec >> ?? bs=4096k count=3000:? dd write: ~59MByte/sec, dd read: ~55MByte/sec >> ?? ceph: total peak: 711MByte/sec write, 698 MByte/sec read >> >> 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based >> vm-disk "sdb" (rbd) >> ?? write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync >> status=progress >> ?? read:? dd of=/dev/null if=/dev/sdb bs=4096k count=3000? >> status=progress >> ?? Average for each VM: >> ?? bs=4096 count=3000:? dd write: ~80 MByte/sec, dd read: ~47MByte/sec >> ?? ceph: total peak: 421MByte/sec write, 248 MByte/sec read >> >> 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) >> ?? write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync >> status=progress >> ?? read:? dd of=/dev/null if=/dev/sdb bs=4096k count=3000? >> status=progress >> ?? Average for each VM: >> ?? bs=4096k count=3000:? dd write: ~145 MByte/sec, dd read: ~130 >> MByte/sec >> ?? 
ceph: total peak: 165 MByte/sec write, 337 MByte/sec read > > -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 From krienke at uni-koblenz.de Tue Mar 17 17:07:47 2020 From: krienke at uni-koblenz.de (Rainer Krienke) Date: Tue, 17 Mar 2020 17:07:47 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <20200317142600.GB416444@dona.proxmox.com> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <20200317142600.GB416444@dona.proxmox.com> Message-ID: <87391f4d-ecf5-7d27-cc3e-3e6eaed80dc6@uni-koblenz.de> Hello Alwin, thank you for your reply. The test VMs config is this one. It only has the system disk as well a disk I added for my test writing on the device with dd: agent: 1 bootdisk: scsi0 cores: 2 cpu: kvm64 ide2: none,media=cdrom memory: 4096 name: pxaclient1 net0: virtio=52:24:28:e9:18:24,bridge=vmbr1,firewall=1 numa: 0 ostype: l26 scsi0: ceph:vm-100-disk-0,size=32G scsi1: ceph:vm-100-disk-1,size=500G scsihw: virtio-scsi-pci serial0: socket smbios1: uuid=c57eb716-8188-485b-89cb-35d41dbf3fc1 sockets: 2 This is as said only a test machine. As I already wrote to Enko, I have some server VMs where I could parallelize IO by using striped LVs at the moment these LVs are not striped. But of course it would also help if for the long run there was a way to lift the "one" disk IO bottleneck. Thank you very much Rainer Am 17.03.20 um 15:26 schrieb Alwin Antreich: > Hallo Rainer, > > On Tue, Mar 17, 2020 at 02:04:22PM +0100, Rainer Krienke wrote: >> Hello, >> >> I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb >> Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic >> disks. The pool with rbd images for disk storage is erasure coded with a >> 4+2 profile. >> >> I ran some performance tests since I noticed that there seems to be a >> strange limit to the disk read/write rate on a single VM even if the >> physical machine hosting the VM as well as cluster is in total capable >> of doing much more. >> >> So what I did was to run a bonnie++ as well as a dd read/write test >> first in parallel on 10 VMs, then on 5 VMs and at last on a single one. >> >> A value of "75" for "bo++rd" in the first line below means that each of >> the 10 bonnie++-processes running on 10 different proxmox VMs in >> parallel reported in average over all the results a value of >> 75MBytes/sec for "block read". 
The ceph-values are the peaks measured by >> ceph itself during the test run (all rd/wr values in MBytes/sec): >> >> VM-count: bo++rd: bo++wr: ceph(rd/wr): dd-rd: dd-wr: ceph(rd/wr): >> 10 75 42 540/485 55 58 698/711 >> 5 90 62 310/338 47 80 248/421 >> 1 108 114 111/120 130 145 337/165 >> >> -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 From a.antreich at proxmox.com Tue Mar 17 19:13:04 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Tue, 17 Mar 2020 19:13:04 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <87391f4d-ecf5-7d27-cc3e-3e6eaed80dc6@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <20200317142600.GB416444@dona.proxmox.com> <87391f4d-ecf5-7d27-cc3e-3e6eaed80dc6@uni-koblenz.de> Message-ID: <20200317181304.GE416444@dona.proxmox.com> On Tue, Mar 17, 2020 at 05:07:47PM +0100, Rainer Krienke wrote: > Hello Alwin, > > thank you for your reply. > > The test VMs config is this one. It only has the system disk as well a > disk I added for my test writing on the device with dd: > > agent: 1 > bootdisk: scsi0 > cores: 2 > cpu: kvm64 If possible, set host as CPU type. Exposes all extension of the CPU model to the VM. But you will need the same CPU model on all the nodes. Otherwise try to find a model with a common set of features. > ide2: none,media=cdrom > memory: 4096 With more memory for the VM, you could also tune the caching inside the VM. > name: pxaclient1 > net0: virtio=52:24:28:e9:18:24,bridge=vmbr1,firewall=1 > numa: 0 > ostype: l26 > scsi0: ceph:vm-100-disk-0,size=32G > scsi1: ceph:vm-100-disk-1,size=500G Use cache=writeback, Qemu caching modes translate to the Ceph cache. With writeback, Ceph activates the librbd caching (default 25 MB). > scsihw: virtio-scsi-pci > serial0: socket > smbios1: uuid=c57eb716-8188-485b-89cb-35d41dbf3fc1 > sockets: 2 If it is a NUMA system, then best activate also the NUMA flag, as KVM tries to run the two threads (cores) on the same node. > > > This is as said only a test machine. As I already wrote to Enko, I have > some server VMs where I could parallelize IO by using striped LVs at the > moment these LVs are not striped. But of course it would also help if > for the long run there was a way to lift the "one" disk IO bottleneck. Yes, I have seen. But this will make backups and managing the disks harder. -- Cheers, Alwin From aderumier at odiso.com Tue Mar 17 19:32:52 2020 From: aderumier at odiso.com (Alexandre DERUMIER) Date: Tue, 17 Mar 2020 19:32:52 +0100 (CET) Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> Message-ID: <180567407.5360701.1584469972478.JavaMail.zimbra@odiso.com> >>What rates do you find on your proxmox/ceph cluster for single VMs? with replicat x3 and 4k block random read/write with big queue depth, I'm around 70000iops read && 40000iops write (by vm disk if iothread is used, the limitation is cpu usage of 1 thread/core by disk) with queue depth=1, I'm around 4000-5000 iops. (because of network latency + cpu latency). This is with client/server with 3ghz intel cpu. 
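Pulled together, the suggestions above (host CPU type, NUMA flag, cache=writeback, iothread with the single-queue SCSI controller) amount to a handful of VM option changes; a sketch for the posted test VM, using its VMID 100 and existing disk — the exact values are an assumption, not something verified in this thread:

   qm set 100 --cpu host --numa 1
   qm set 100 --scsihw virtio-scsi-single
   qm set 100 --scsi1 ceph:vm-100-disk-1,size=500G,cache=writeback,iothread=1

Note that iothread only takes effect for SCSI disks when the controller is virtio-scsi-single, and changed disk options need a full VM stop/start to apply.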
----- Mail original ----- De: "Rainer Krienke" ?: "proxmoxve" Envoy?: Mardi 17 Mars 2020 14:04:22 Objet: [PVE-User] Proxmox with ceph storage VM performance strangeness Hello, I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic disks. The pool with rbd images for disk storage is erasure coded with a 4+2 profile. I ran some performance tests since I noticed that there seems to be a strange limit to the disk read/write rate on a single VM even if the physical machine hosting the VM as well as cluster is in total capable of doing much more. So what I did was to run a bonnie++ as well as a dd read/write test first in parallel on 10 VMs, then on 5 VMs and at last on a single one. A value of "75" for "bo++rd" in the first line below means that each of the 10 bonnie++-processes running on 10 different proxmox VMs in parallel reported in average over all the results a value of 75MBytes/sec for "block read". The ceph-values are the peaks measured by ceph itself during the test run (all rd/wr values in MBytes/sec): VM-count: bo++rd: bo++wr: ceph(rd/wr): dd-rd: dd-wr: ceph(rd/wr): 10 75 42 540/485 55 58 698/711 5 90 62 310/338 47 80 248/421 1 108 114 111/120 130 145 337/165 What I find a little strange is that running many VMs doing IO in parallel I reach a write rate of about 485-711 MBytes/sec. However when running a single VM the maximum is at 120-165 MBytes/sec. Since the whole networking is based on a 10GB infrastructure and an iperf test between a VM and a ceph node reported nearby 10Gb I would expect a higher rate for the single VM. Even if I run a test with 5 VMs on *one* physical host (values not shown above), the results are not far behind the values for 5 VMs on 5 hosts shown above. So the single host seems not to be the limiting factor, but the VM itself is limiting IO. What rates do you find on your proxmox/ceph cluster for single VMs? Does any one have any explanation for this rather big difference or perhaps an idea what to try in order to get higher IO-rates from a single VM? 
Thank you very much in advance Rainer --------------------------------------------- Here are the more detailed test results for anyone interested: Using bonnie++: 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for each VM: block write: ~42MByte/sec, block read: ~75MByte/sec ceph: total peak: 485MByte/sec write, 540MByte/sec read 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for each VM: block write: ~62MByte/sec, block read: ~90MByte/sec ceph: total peak: 338MByte/sec write, 310MByte/sec read 1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root Average for VM: block write: ~114 MByte/sec, block read: ~108MByte/sec ceph: total peak: 120 MByte/sec write, 111MByte/sec read Using dd: 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd) write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress Average for each VM: bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec ceph: total peak: 711MByte/sec write, 698 MByte/sec read 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd) write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress Average for each VM: bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec ceph: total peak: 421MByte/sec write, 248 MByte/sec read 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync status=progress read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress Average for each VM: bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec ceph: total peak: 165 MByte/sec write, 337 MByte/sec read -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 _______________________________________________ pve-user mailing list pve-user at pve.proxmox.com https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From elacunza at binovo.es Wed Mar 18 08:47:55 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 18 Mar 2020 08:47:55 +0100 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <9aada20e-6cf4-d06b-53a0-452418489713@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> <9aada20e-6cf4-d06b-53a0-452418489713@uni-koblenz.de> Message-ID: Hi Rainer, El 17/3/20 a las 16:58, Rainer Krienke escribi?: > thanks for your answer, Take into account I haven't used iothreads, what I told you is what I learned here and elsewhere. Alexandre and Alwin are experts in this instead ;) > if I understand you correctly, than iothreads can only help if the VM > has more than one disk, hence your proposal to build a raid0 on two rbd > devices. The disadvantage of this solution would of course be that disk > usage would be doubled. Not necesarilly, just create more, smaller disks. Create a stripped raid0 and add it as PV to LVM, then create the LVs you need. Alwin is right that this will make disk management more complex... 
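A minimal sketch of that layout inside the guest, assuming four extra RBD-backed virtual disks sdb-sde (each with its own iothread); disk count, sizes and volume names are purely illustrative:

   # software RAID0 across the virtual disks, then LVM on top
   mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
   pvcreate /dev/md0
   vgcreate vg_data /dev/md0
   lvcreate -L 400G -n lv_data vg_data

   # or skip mdadm and let LVM stripe across the disks itself
   pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
   vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde
   lvcreate -i 4 -I 64 -L 400G -n lv_data vg_data

Either way a large request fans out over several RBD images, so more librbd/iothread contexts can work in parallel.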
> A fileserver VM I manage (not yet productive) could profit from this. I > use LVM on it anyway and I could use striped LVs, so those volumes would > read from more vm pv disks. Should help I guess. > > The hosts CPU is a AMD EPYC 7402 24-Core Processor. Does it make sense > to select a specific CPU-type for the VM. My test machines have a > default kvm64 processor. The number of processors should then probably > be minimal equal to the number of disks (number of iothreads)? If all hosts have the same CPU, then use "host" type CPU. > Do you know if it makes any difference wheater I use the VirtIO > SCSI-driver versus the Virtio-SCSI-single driver? I haven't tried -single, maybe others can comment on this. Cheers Eneko > > Thank you very much > Rainer > > Am 17.03.20 um 14:10 schrieb Eneko Lacunza: >> Hi, >> >> You can try to enable IO threads and assign multiple Ceph disks to the >> VM, then build some kind of raid0 to increase performance. >> >> Generally speaking, a SSD based Ceph cluster is considered to perform >> well when a VM gets about 2000 IOPS, and factors like CPU 1-thread >> performance, network and disk have to be selected with care. Also >> server's energy saving disabled, etc. >> >> What CPUs in those 9 nodes? >> >> Ceph is built for parallel access and scaling. You're only using 1 >> thread of your VM for disk IO currently. >> >> Cheers >> Eneko >> >> El 17/3/20 a las 14:04, Rainer Krienke escribi?: >>> Hello, >>> >>> I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb >>> Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic >>> disks. The pool with rbd images for disk storage is erasure coded with a >>> 4+2 profile. >>> >>> I ran some performance tests since I noticed that there seems to be a >>> strange limit to the disk read/write rate on a single VM even if the >>> physical machine hosting the VM as well as cluster is in total capable >>> of doing much more. >>> >>> So what I did was to run a bonnie++ as well as a dd read/write test >>> first in parallel on 10 VMs, then on 5 VMs and at last on a single one. >>> >>> A value of "75" for "bo++rd" in the first line below means that each of >>> the 10 bonnie++-processes running on 10 different proxmox VMs in >>> parallel reported in average over all the results a value of >>> 75MBytes/sec for "block read". The ceph-values are the peaks measured by >>> ceph itself during the test run (all rd/wr values in MBytes/sec): >>> >>> VM-count:? bo++rd: bo++wr: ceph(rd/wr):? dd-rd:? dd-wr:? ceph(rd/wr): >>> 10?????????? 75????? 42????? 540/485?????? 55???? 58????? 698/711 >>> ? 5?????????? 90????? 62????? 310/338?????? 47???? 80????? 248/421 >>> ? 1????????? 108???? 114????? 111/120????? 130??? 145????? 337/165 >>> >>> >>> What I find a little strange is that running many VMs doing IO in >>> parallel I reach a write rate of about 485-711 MBytes/sec. However when >>> running a single VM the maximum is at 120-165 MBytes/sec. Since the >>> whole networking is based on a 10GB infrastructure and an iperf test >>> between a VM and a ceph node reported nearby 10Gb I would expect a >>> higher rate for the single VM. Even if I run a test with 5 VMs on *one* >>> physical host (values not shown above), the results are not far behind >>> the values for 5 VMs on 5 hosts shown above. So the single host seems >>> not to be the limiting factor, but the VM itself is limiting IO. >>> >>> What rates do you find on your proxmox/ceph cluster for single VMs? 
>>> Does any one have any explanation for this rather big difference or >>> perhaps an idea what to try in order to get higher IO-rates from a >>> single VM? >>> >>> Thank you very much in advance >>> Rainer >>> >>> >>> >>> --------------------------------------------- >>> Here are the more detailed test results for anyone interested: >>> >>> Using bonnie++: >>> 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; >>> bonnie++ -u root >>> ?? Average for each VM: >>> ?? block write: ~42MByte/sec, block read: ~75MByte/sec >>> ?? ceph: total peak: 485MByte/sec write, 540MByte/sec read >>> >>> 5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u >>> root >>> ?? Average for each VM: >>> ?? block write: ~62MByte/sec, block read: ~90MByte/sec >>> ?? ceph: total peak: 338MByte/sec write, 310MByte/sec read >>> >>> 1 VM? 4GB RAM, BTRFS, cd /root; bonnie++ -u root >>> ?? Average for VM: >>> ?? block write: ~114 MByte/sec, block read: ~108MByte/sec >>> ?? ceph: total peak: 120 MByte/sec write, 111MByte/sec read >>> >>> >>> Using dd: >>> 10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based >>> vm-disk "sdb" (rbd) >>> ?? write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync >>> status=progress >>> ?? read:? dd of=/dev/null if=/dev/sdb bs=nnn count=kkk? status=progress >>> ?? Average for each VM: >>> ?? bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec >>> ?? bs=4096k count=3000:? dd write: ~59MByte/sec, dd read: ~55MByte/sec >>> ?? ceph: total peak: 711MByte/sec write, 698 MByte/sec read >>> >>> 5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based >>> vm-disk "sdb" (rbd) >>> ?? write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync >>> status=progress >>> ?? read:? dd of=/dev/null if=/dev/sdb bs=4096k count=3000 >>> status=progress >>> ?? Average for each VM: >>> ?? bs=4096 count=3000:? dd write: ~80 MByte/sec, dd read: ~47MByte/sec >>> ?? ceph: total peak: 421MByte/sec write, 248 MByte/sec read >>> >>> 1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) >>> ?? write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync >>> status=progress >>> ?? read:? dd of=/dev/null if=/dev/sdb bs=4096k count=3000 >>> status=progress >>> ?? Average for each VM: >>> ?? bs=4096k count=3000:? dd write: ~145 MByte/sec, dd read: ~130 >>> MByte/sec >>> ?? ceph: total peak: 165 MByte/sec write, 337 MByte/sec read >> -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From mark at openvs.co.uk Fri Mar 20 12:46:41 2020 From: mark at openvs.co.uk (Mark Adams) Date: Fri, 20 Mar 2020 11:46:41 +0000 Subject: VM Start timeout with pci passthrough Message-ID: Hi All, I am having the issue that is detailed in this forum post: https://forum.proxmox.com/threads/vm-start-timeout-with-pci-gpu.45843/ I thought I would take it to the mailing list to see if anyone here has any ideas? VM's boot fine the first time the machine starts up, and even a few reboots work fine - but eventually the timeout start happening. If you start the vm manually it works fine. Also if you lower the ram allocation (for instance from 64 -> 32) then it also seems to boot fine. Any thoughts on this very welcome! 
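If the slow start turns out to be KVM allocating and pinning all guest RAM for the passthrough device (see the reply below), one commonly used workaround is to back the VM with preallocated 1 GiB hugepages; a sketch, with VMID 100 and a 64 GiB reservation as placeholders rather than values from this thread:

   # on the host: reserve 64 x 1 GiB hugepages at boot via the kernel command line
   # (e.g. in GRUB_CMDLINE_LINUX_DEFAULT, then update the bootloader and reboot)
   #   default_hugepagesz=1G hugepagesz=1G hugepages=64
   # then let the VM use 1 GiB hugepages
   qm set 100 --hugepages 1024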
Regards, Mark From leesteken at pm.me Fri Mar 20 15:40:42 2020 From: leesteken at pm.me (leesteken at pm.me) Date: Fri, 20 Mar 2020 14:40:42 +0000 Subject: [PVE-User] VM Start timeout with pci passthrough In-Reply-To: References: Message-ID: <9Df5IZUDoG6h3SYlJ23tNuAC8Q4tAzj0bwJSBfbr91LuG2RN_rVMsrmP1p9wbip03GJZ3RHpXFBXP-ymNhextw4BD3OXM1qJJuooJmUO2fE=@pm.me> ??????? Original Message ??????? On Friday, March 20, 2020 12:46 PM, Mark Adams via pve-user wrote: > Hi All, > > I am having the issue that is detailed in this forum post: > > https://forum.proxmox.com/threads/vm-start-timeout-with-pci-gpu.45843/ > > I thought I would take it to the mailing list to see if anyone here has any > ideas? > > VM's boot fine the first time the machine starts up, and even a few reboots > work fine - but eventually the timeout start happening. > > If you start the vm manually it works fine. > > Also if you lower the ram allocation (for instance from 64 -> 32) then it > also seems to boot fine. > > Any thoughts on this very welcome! When doing PCI passthrough, KVM needs to allocate and pin (and zero out) all of the memory, because of possible DMA from PCI devices. This used to take quite some time (minutes), and although it has been improved, maybe it still does. If you reduce the RAM of the VM, does it also start quicker? If you use (1GB) hugepages, does the VM start quicker? In my case, booting a VM with 32GB on a AMD Zen+ machine only takes seconds. Does your system have multiple NUMA nodes or sockets? Maybe, if the amount is more than a single node, it takes longer? What platform are you using? Maybe the Proxmox developers can tell you where the time-out is specified, or make the time-out dependent on the amount of RAM (if that is the cause)? kind regards, Arjen From elacunza at binovo.es Tue Mar 24 10:34:15 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 24 Mar 2020 10:34:15 +0100 Subject: [PVE-User] Spillover issue Message-ID: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> Hi all, We're seeing a spillover issue with Ceph, using 14.2.8: We originally had 1GB rocks.db partition: 1. ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD osd.3 spilled over 78 MiB metadata from 'db' device (1024 MiB used of 1024 MiB) to slow device osd.4 spilled over 78 MiB metadata from 'db' device (1024 MiB used of 1024 MiB) to slow device osd.5 spilled over 84 MiB metadata from 'db' device (1024 MiB used of 1024 MiB) to slow device We have created new 6GiB partitions for rocks.db, copied the original partition, then extended it with "ceph-bluestore-tool bluefs-bdev-expand". Now we get: 1. ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD osd.3 spilled over 5 MiB metadata from 'db' device (555 MiB used of 6.0 GiB) to slow device osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of 6.0 GiB) to slow device osd.5 spilled over 5 MiB metadata from 'db' device (561 MiB used of 6.0 GiB) to slow device Issuing "ceph daemon osd.X compact" doesn't help, but shows the following transitional state: 1. ceph daemon osd.5 compact { "elapsed_time": 5.4560688339999999 } 2. 
ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of 6.0 GiB) to slow device osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of 6.0 GiB) to slow device osd.5 spilled over 5 MiB metadata from 'db' device (1.1 GiB used of 6.0 GiB) to slow device (...and after a while...) 3. ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of 6.0 GiB) to slow device osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of 6.0 GiB) to slow device osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of 6.0 GiB) to slow device I may be overlooking something, any idea? Just found also the following ceph issue: https://tracker.ceph.com/issues/38745 5MiB of metadata in slow isn't a big problem, but cluster is permanently in health Warning state... :) # pveversion -v proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve) pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e) pve-kernel-helper: 6.1-7 pve-kernel-5.3: 6.1-5 pve-kernel-4.15: 5.4-14 pve-kernel-5.3.18-2-pve: 5.3.18-2 pve-kernel-4.15.18-26-pve: 4.15.18-54 pve-kernel-4.15.18-25-pve: 4.15.18-53 pve-kernel-4.15.18-12-pve: 4.15.18-36 pve-kernel-4.15.18-2-pve: 4.15.18-21 pve-kernel-4.13.13-6-pve: 4.13.13-42 pve-kernel-4.13.13-2-pve: 4.13.13-33 ceph: 14.2.8-pve1 ceph-fuse: 14.2.8-pve1 corosync: 3.0.3-pve1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35+pve1 ksm-control-daemon: 1.3-1 libjs-extjs: 6.0.1-10 libknet1: 1.15-pve1 libpve-access-control: 6.0-6 libpve-apiclient-perl: 3.0-3 libpve-common-perl: 6.0-12 libpve-guest-common-perl: 3.0-3 libpve-http-server-perl: 3.0-4 libpve-storage-perl: 6.1-4 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 3.2.1-1 lxcfs: 3.0.3-pve60 novnc-pve: 1.1.0-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.1-3 pve-cluster: 6.1-4 pve-container: 3.0-19 pve-docs: 6.1-6 pve-edk2-firmware: 2.20200229-1 pve-firewall: 4.0-10 pve-firmware: 3.0-6 pve-ha-manager: 3.0-8 pve-i18n: 2.0-4 pve-qemu-kvm: 4.1.1-3 pve-xtermjs: 4.3.0-1 qemu-server: 6.1-6 smartmontools: 7.1-pve2 spiceterm: 3.1-1 vncterm: 1.6-1 zfsutils-linux: 0.8.3-pve1 Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From a.antreich at proxmox.com Tue Mar 24 12:24:23 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Tue, 24 Mar 2020 12:24:23 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> Message-ID: <20200324112423.GI46219@dona.proxmox.com> Hello Eneko, On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: > Hi all, > > We're seeing a spillover issue with Ceph, using 14.2.8: > > We originally had 1GB rocks.db partition: > > 1. 
ceph health detail > HEALTH_WARN BlueFS spillover detected on 3 OSD > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > osd.3 spilled over 78 MiB metadata from 'db' device (1024 MiB used > of 1024 MiB) to slow device > osd.4 spilled over 78 MiB metadata from 'db' device (1024 MiB used > of 1024 MiB) to slow device > osd.5 spilled over 84 MiB metadata from 'db' device (1024 MiB used > of 1024 MiB) to slow device > > We have created new 6GiB partitions for rocks.db, copied the original > partition, then extended it with "ceph-bluestore-tool bluefs-bdev-expand". > Now we get: > > 1. ceph health detail > HEALTH_WARN BlueFS spillover detected on 3 OSD > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > osd.3 spilled over 5 MiB metadata from 'db' device (555 MiB used of > 6.0 GiB) to slow device > osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of > 6.0 GiB) to slow device > osd.5 spilled over 5 MiB metadata from 'db' device (561 MiB used of > 6.0 GiB) to slow device > > Issuing "ceph daemon osd.X compact" doesn't help, but shows the following > transitional state: > > 1. ceph daemon osd.5 compact { > "elapsed_time": 5.4560688339999999 > } > 2. ceph health detail > HEALTH_WARN BlueFS spillover detected on 3 OSD > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of > 6.0 GiB) to slow device > osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of > 6.0 GiB) to slow device > osd.5 spilled over 5 MiB metadata from 'db' device (1.1 GiB used of > 6.0 GiB) to slow device > (...and after a while...) > 3. ceph health detail > HEALTH_WARN BlueFS spillover detected on 3 OSD > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of > 6.0 GiB) to slow device > osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of > 6.0 GiB) to slow device > osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of > 6.0 GiB) to slow device > > I may be overlooking something, any idea? Just found also the following ceph > issue: > > https://tracker.ceph.com/issues/38745 > > 5MiB of metadata in slow isn't a big problem, but cluster is permanently in > health Warning state... :) The DB/WAL device is to small and all the new metadata has to be written to the slow device. This will destroy performance. I think the size changes, as the DB gets compacted. The easiest way ist to destroy and re-create the OSD with a bigger DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. -- Cheers, Alwin From elacunza at binovo.es Tue Mar 24 13:12:03 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Tue, 24 Mar 2020 13:12:03 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <20200324112423.GI46219@dona.proxmox.com> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> Message-ID: <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> Hi Allwin, El 24/3/20 a las 12:24, Alwin Antreich escribi?: > On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: >> We're seeing a spillover issue with Ceph, using 14.2.8: [...] >> 3. 
ceph health detail >> HEALTH_WARN BlueFS spillover detected on 3 OSD >> BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD >> osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of >> 6.0 GiB) to slow device >> osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of >> 6.0 GiB) to slow device >> osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of >> 6.0 GiB) to slow device >> >> I may be overlooking something, any idea? Just found also the following ceph >> issue: >> >> https://tracker.ceph.com/issues/38745 >> >> 5MiB of metadata in slow isn't a big problem, but cluster is permanently in >> health Warning state... :) > The DB/WAL device is to small and all the new metadata has to be written > to the slow device. This will destroy performance. > > I think the size changes, as the DB gets compacted. Yes. But it isn't too small... it's 6 GiB and there's only ~560MiB of data. > > The easiest way ist to destroy and re-create the OSD with a bigger > DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. It's well below the 3GiB limit in the guideline ;) Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From silencly07 at gmail.com Tue Mar 24 13:14:24 2020 From: silencly07 at gmail.com (petrus) Date: Tue, 24 Mar 2020 20:14:24 +0800 Subject: [PVE-User] AMD integrated graphics passthrough Message-ID: Hi, My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 Graphics, and I only have this one GPU. Can I passthrough this vega graphics to VM? I did a lot of search, but can't find an explicit answer. Any help? Thanks in advance. From gilberto.nunes32 at gmail.com Tue Mar 24 13:26:40 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 24 Mar 2020 09:26:40 -0300 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough --- Gilberto Nunes Ferreira Em ter., 24 de mar. de 2020 ?s 09:15, petrus escreveu: > Hi, > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 Graphics, and I > only have this one GPU. > Can I passthrough this vega graphics to VM? I did a lot of search, but > can't find an explicit answer. > > Any help? Thanks in advance. > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From silencly07 at gmail.com Tue Mar 24 13:38:33 2020 From: silencly07 at gmail.com (petrus) Date: Tue, 24 Mar 2020 20:38:33 +0800 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough Thank you, it's a very usefully guide. But it seems not mention that whether can I passthrough my only GPU. So, I guess you means the answer is Yes? Gilberto Nunes ?2020?3?24??? ??8:28??? > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > --- > Gilberto Nunes Ferreira > > > > > > Em ter., 24 de mar. de 2020 ?s 09:15, petrus > escreveu: > > > Hi, > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 Graphics, and I > > only have this one GPU. > > Can I passthrough this vega graphics to VM? I did a lot of search, but > > can't find an explicit answer. > > > > Any help? Thanks in advance. 
> > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From gilberto.nunes32 at gmail.com Tue Mar 24 13:53:57 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Tue, 24 Mar 2020 09:53:57 -0300 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: I guess qm set VMID -hostpci0 00:02.0 You'll need figure out what host id has you GPU device... Follow the guide --- Gilberto Nunes Ferreira Em ter., 24 de mar. de 2020 ?s 09:39, petrus escreveu: > > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > > Thank you, it's a very usefully guide. > But it seems not mention that whether can I passthrough my only GPU. So, I > guess you means the answer is Yes? > > Gilberto Nunes ?2020?3?24??? ??8:28??? > > > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > > --- > > Gilberto Nunes Ferreira > > > > > > > > > > > > Em ter., 24 de mar. de 2020 ?s 09:15, petrus > > escreveu: > > > > > Hi, > > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 Graphics, > and I > > > only have this one GPU. > > > Can I passthrough this vega graphics to VM? I did a lot of search, but > > > can't find an explicit answer. > > > > > > Any help? Thanks in advance. > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From silencly07 at gmail.com Tue Mar 24 14:09:10 2020 From: silencly07 at gmail.com (petrus) Date: Tue, 24 Mar 2020 21:09:10 +0800 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: Great, thanks again. Gilberto Nunes ?2020?3?24??? ??8:55??? > I guess > > qm set VMID -hostpci0 00:02.0 > > You'll need figure out what host id has you GPU device... Follow the guide > > --- > Gilberto Nunes Ferreira > > > > > > > Em ter., 24 de mar. de 2020 ?s 09:39, petrus > escreveu: > > > > > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > > > > Thank you, it's a very usefully guide. > > But it seems not mention that whether can I passthrough my only GPU. So, > I > > guess you means the answer is Yes? > > > > Gilberto Nunes ?2020?3?24??? ??8:28??? > > > > > > https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > > > --- > > > Gilberto Nunes Ferreira > > > > > > > > > > > > > > > > > > Em ter., 24 de mar. de 2020 ?s 09:15, petrus > > > escreveu: > > > > > > > Hi, > > > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 Graphics, > > and I > > > > only have this one GPU. > > > > Can I passthrough this vega graphics to VM? I did a lot of search, > but > > > > can't find an explicit answer. > > > > > > > > Any help? Thanks in advance. 
> > > > _______________________________________________ > > > > pve-user mailing list > > > > pve-user at pve.proxmox.com > > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > > > _______________________________________________ > > > pve-user mailing list > > > pve-user at pve.proxmox.com > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From a.antreich at proxmox.com Tue Mar 24 14:54:04 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Tue, 24 Mar 2020 14:54:04 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> Message-ID: <20200324135404.GJ46219@dona.proxmox.com> On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: > Hi Allwin, > > El 24/3/20 a las 12:24, Alwin Antreich escribi?: > > On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: > > > We're seeing a spillover issue with Ceph, using 14.2.8: > [...] > > > 3. ceph health detail > > > HEALTH_WARN BlueFS spillover detected on 3 OSD > > > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > > > osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of > > > 6.0 GiB) to slow device > > > osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of > > > 6.0 GiB) to slow device > > > osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of > > > 6.0 GiB) to slow device > > > > > > I may be overlooking something, any idea? Just found also the following ceph > > > issue: > > > > > > https://tracker.ceph.com/issues/38745 > > > > > > 5MiB of metadata in slow isn't a big problem, but cluster is permanently in > > > health Warning state... :) > > The DB/WAL device is to small and all the new metadata has to be written > > to the slow device. This will destroy performance. > > > > I think the size changes, as the DB gets compacted. > Yes. But it isn't too small... it's 6 GiB and there's only ~560MiB of data. Yes true. I meant the used of size. But the message is oddly. You should find the compaction stats in the OSD log files. It could be, as in the bug tracker reasoned, that the compaction needs to much space and spills over to the slow device. Addionally, if no set extra, the WAL will take up 512 MB on the DB device. If the above doesn't give any information then you may need to export the bluefs (RocksDB). Then you can run the kvstore-tool on it. > > > > The easiest way ist to destroy and re-create the OSD with a bigger > > DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. > > It's well below the 3GiB limit in the guideline ;) For now. ;) -- Cheers, Alwin From ml+pve-user at valo.at Tue Mar 24 18:37:40 2020 From: ml+pve-user at valo.at (Christian Kivalo) Date: Tue, 24 Mar 2020 18:37:40 +0100 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: On March 24, 2020 2:09:10 PM GMT+01:00, petrus wrote: >Great, thanks again. > You can not pass through the boot gpu. You need a second gpu to pass through. >Gilberto Nunes ?2020?3?24??? ??8:55??? 
> >> I guess >> >> qm set VMID -hostpci0 00:02.0 >> >> You'll need figure out what host id has you GPU device... Follow the >guide >> >> --- >> Gilberto Nunes Ferreira >> >> >> >> >> >> >> Em ter., 24 de mar. de 2020 ?s 09:39, petrus >> escreveu: >> >> > > >> >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough >> > >> > Thank you, it's a very usefully guide. >> > But it seems not mention that whether can I passthrough my only >GPU. So, >> I >> > guess you means the answer is Yes? >> > >> > Gilberto Nunes ?2020?3?24??? ??8:28??? >> > >> > > >> >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough >> > > --- >> > > Gilberto Nunes Ferreira >> > > >> > > >> > > >> > > >> > > >> > > Em ter., 24 de mar. de 2020 ?s 09:15, petrus > >> > > escreveu: >> > > >> > > > Hi, >> > > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 >Graphics, >> > and I >> > > > only have this one GPU. >> > > > Can I passthrough this vega graphics to VM? I did a lot of >search, >> but >> > > > can't find an explicit answer. >> > > > >> > > > Any help? Thanks in advance. >> > > > _______________________________________________ >> > > > pve-user mailing list >> > > > pve-user at pve.proxmox.com >> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > > >> > > _______________________________________________ >> > > pve-user mailing list >> > > pve-user at pve.proxmox.com >> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > > >> > _______________________________________________ >> > pve-user mailing list >> > pve-user at pve.proxmox.com >> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> > >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >> >_______________________________________________ >pve-user mailing list >pve-user at pve.proxmox.com >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Christian Kivalo From laurentfdumont at gmail.com Wed Mar 25 03:07:48 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 24 Mar 2020 22:07:48 -0400 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: What do we mean by boot GPU? A GPU that is is mapped to physical host itself? On Tue, Mar 24, 2020 at 1:37 PM Christian Kivalo wrote: > > > On March 24, 2020 2:09:10 PM GMT+01:00, petrus > wrote: > >Great, thanks again. > > > You can not pass through the boot gpu. You need a second gpu to pass > through. > >Gilberto Nunes ?2020?3?24??? ??8:55??? > > > >> I guess > >> > >> qm set VMID -hostpci0 00:02.0 > >> > >> You'll need figure out what host id has you GPU device... Follow the > >guide > >> > >> --- > >> Gilberto Nunes Ferreira > >> > >> > >> > >> > >> > >> > >> Em ter., 24 de mar. de 2020 ?s 09:39, petrus > >> escreveu: > >> > >> > > > >> > >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > >> > > >> > Thank you, it's a very usefully guide. > >> > But it seems not mention that whether can I passthrough my only > >GPU. So, > >> I > >> > guess you means the answer is Yes? > >> > > >> > Gilberto Nunes ?2020?3?24??? ??8:28??? > >> > > >> > > > >> > >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > >> > > --- > >> > > Gilberto Nunes Ferreira > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > Em ter., 24 de mar. 
de 2020 ?s 09:15, petrus > > > >> > > escreveu: > >> > > > >> > > > Hi, > >> > > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 > >Graphics, > >> > and I > >> > > > only have this one GPU. > >> > > > Can I passthrough this vega graphics to VM? I did a lot of > >search, > >> but > >> > > > can't find an explicit answer. > >> > > > > >> > > > Any help? Thanks in advance. > >> > > > _______________________________________________ > >> > > > pve-user mailing list > >> > > > pve-user at pve.proxmox.com > >> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > > > >> > > _______________________________________________ > >> > > pve-user mailing list > >> > > pve-user at pve.proxmox.com > >> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > > >> > _______________________________________________ > >> > pve-user mailing list > >> > pve-user at pve.proxmox.com > >> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >_______________________________________________ > >pve-user mailing list > >pve-user at pve.proxmox.com > >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- > Christian Kivalo > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From silencly07 at gmail.com Wed Mar 25 06:40:23 2020 From: silencly07 at gmail.com (petrus) Date: Wed, 25 Mar 2020 13:40:23 +0800 Subject: [PVE-User] AMD integrated graphics passthrough In-Reply-To: References: Message-ID: > You can not pass through the boot gpu. You need a second gpu to pass through. Hi, I saw an article here https://lunar.computer/posts/gpu-passthrough-proxmox-60/, it says he pass through a integrated Intel GPU to VM, the hardware is NUC NUC7i5BNH. I checked this model of NUC on intel website, https://ark.intel.com/content/www/us/en/ark/products/95067/intel-nuc-kit-nuc7i5bnh.html, it only has one integrated graphics. So, according to this article, it can be pass through? Christian Kivalo ?2020?3?25??? ??1:37??? > > > On March 24, 2020 2:09:10 PM GMT+01:00, petrus > wrote: > >Great, thanks again. > > > You can not pass through the boot gpu. You need a second gpu to pass > through. > >Gilberto Nunes ?2020?3?24??? ??8:55??? > > > >> I guess > >> > >> qm set VMID -hostpci0 00:02.0 > >> > >> You'll need figure out what host id has you GPU device... Follow the > >guide > >> > >> --- > >> Gilberto Nunes Ferreira > >> > >> > >> > >> > >> > >> > >> Em ter., 24 de mar. de 2020 ?s 09:39, petrus > >> escreveu: > >> > >> > > > >> > >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > >> > > >> > Thank you, it's a very usefully guide. > >> > But it seems not mention that whether can I passthrough my only > >GPU. So, > >> I > >> > guess you means the answer is Yes? > >> > > >> > Gilberto Nunes ?2020?3?24??? ??8:28??? > >> > > >> > > > >> > >https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough > >> > > --- > >> > > Gilberto Nunes Ferreira > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > Em ter., 24 de mar. de 2020 ?s 09:15, petrus > > > >> > > escreveu: > >> > > > >> > > > Hi, > >> > > > My hardware is AMD Ryzen? 5 3400G with Radeon? RX Vega 11 > >Graphics, > >> > and I > >> > > > only have this one GPU. 
> >> > > > Can I passthrough this vega graphics to VM? I did a lot of > >search, > >> but > >> > > > can't find an explicit answer. > >> > > > > >> > > > Any help? Thanks in advance. > >> > > > _______________________________________________ > >> > > > pve-user mailing list > >> > > > pve-user at pve.proxmox.com > >> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > > > >> > > _______________________________________________ > >> > > pve-user mailing list > >> > > pve-user at pve.proxmox.com > >> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > > >> > _______________________________________________ > >> > pve-user mailing list > >> > pve-user at pve.proxmox.com > >> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > > >> _______________________________________________ > >> pve-user mailing list > >> pve-user at pve.proxmox.com > >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > >> > >_______________________________________________ > >pve-user mailing list > >pve-user at pve.proxmox.com > >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > -- > Christian Kivalo > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Wed Mar 25 08:43:41 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 25 Mar 2020 08:43:41 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <20200324135404.GJ46219@dona.proxmox.com> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> <20200324135404.GJ46219@dona.proxmox.com> Message-ID: <6104c080-a479-1a17-78d7-ffd232b20aed@binovo.es> Hi Alwin, El 24/3/20 a las 14:54, Alwin Antreich escribi?: > On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: >> Hi Allwin, >> >> El 24/3/20 a las 12:24, Alwin Antreich escribi?: >>> On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: >>>> We're seeing a spillover issue with Ceph, using 14.2.8: >> [...] >>>> 3. ceph health detail >>>> HEALTH_WARN BlueFS spillover detected on 3 OSD >>>> BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD >>>> osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of >>>> 6.0 GiB) to slow device >>>> osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of >>>> 6.0 GiB) to slow device >>>> osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of >>>> 6.0 GiB) to slow device >>>> >>>> I may be overlooking something, any idea? Just found also the following ceph >>>> issue: >>>> >>>> https://tracker.ceph.com/issues/38745 >>>> >>>> 5MiB of metadata in slow isn't a big problem, but cluster is permanently in >>>> health Warning state... :) >>> The DB/WAL device is to small and all the new metadata has to be written >>> to the slow device. This will destroy performance. >>> >>> I think the size changes, as the DB gets compacted. >> Yes. But it isn't too small... it's 6 GiB and there's only ~560MiB of data. > Yes true. I meant the used of size. But the message is oddly. > > You should find the compaction stats in the OSD log files. It could be, > as in the bug tracker reasoned, that the compaction needs to much space > and spills over to the slow device. Addionally, if no set extra, the WAL > will take up 512 MB on the DB device. I don't see any indication that compaction needs too much space: 2020-03-24 14:24:04.883 7f03ffbee700? 
4 rocksdb: [db/db_impl.cc:777] ------- DUMPING STATS ------- 2020-03-24 14:24:04.883 7f03ffbee700? 4 rocksdb: [db/db_impl.cc:778] ** DB Stats ** Uptime(secs): 15000.1 total, 600.0 interval Cumulative writes: 4646 writes, 18K keys, 4646 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.00 MB/s Cumulative WAL: 4646 writes, 1891 syncs, 2.46 writes per sync, written: 0.01 GB, 0.00 MB/s Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent Interval writes: 163 writes, 637 keys, 163 commit groups, 1.0 writes per commit group, ingest: 0.63 MB, 0.00 MB/s Interval WAL: 163 writes, 67 syncs, 2.40 writes per sync, written: 0.00 MB, 0.00 MB/s Interval stall: 00:00:0.000 H:M:S, 0.0 percent ** Compaction Stats [default] ** Level??? Files?? Size???? Score Read(GB)? Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ? L0????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? 0.0?? 1.0????? 0.0???? 33.4 0.02????????????? 0.00???????? 2??? 0.009?????? 0????? 0 ? L1????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? 0.0?? 0.8??? 162.1??? 134.6 0.09????????????? 0.06???????? 1??? 0.092??? 127K??? 10K ? L2????? 9/0?? 538.64 MB?? 0.2????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? 0.0? 43.6??? 102.7??? 101.2 5.32????????????? 1.31???????? 1??? 5.325?? 1496K?? 110K ?Sum????? 9/0?? 538.64 MB?? 0.0????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? 0.0 961.1??? 103.3??? 101.5 5.43????????????? 1.37???????? 4??? 1.358?? 1623K?? 121K ?Int????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? 0.0?? 0.0????? 0.0????? 0.0 0.00????????????? 0.00???????? 0??? 0.000?????? 0????? 0 ** Compaction Stats [default] ** Priority??? Files?? Size???? Score Read(GB)? Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ?Low????? 0/0??? 0.00 KB?? 0.0????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? 0.0?? 0.0??? 103.7??? 101.7 5.42????????????? 1.36???????? 2??? 2.708?? 1623K?? 121K High????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? 0.0?? 0.0????? 0.0???? 43.9 0.01????????????? 0.00???????? 1??? 0.013?????? 0????? 0 User????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? 0.0?? 0.0????? 0.0????? 0.4 0.00????????????? 0.00???????? 1??? 0.004?????? 0????? 0 Uptime(secs): 15000.1 total, 600.0 interval Flush(GB): cumulative 0.001, interval 0.000 AddFile(GB): cumulative 0.000, interval 0.000 AddFile(Total Files): cumulative 0, interval 0 AddFile(L0 Files): cumulative 0, interval 0 AddFile(Keys): cumulative 0, interval 0 Cumulative compaction: 0.54 GB write, 0.04 MB/s write, 0.55 GB read, 0.04 MB/s read, 5.4 seconds Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count I see the following in a perf dump: ??? "bluefs": { ??????? 
"gift_bytes": 0, ??????? "reclaim_bytes": 0, ??????? "db_total_bytes": 6442442752, ??????? "db_used_bytes": 696246272, ??????? "wal_total_bytes": 0, ??????? "wal_used_bytes": 0, ??????? "slow_total_bytes": 40004222976, ??????? "slow_used_bytes": 5242880, ??????? "num_files": 20, ??????? "log_bytes": 41631744, ??????? "log_compactions": 0, ??????? "logged_bytes": 40550400, ??????? "files_written_wal": 2, ??????? "files_written_sst": 41, ??????? "bytes_written_wal": 102040973, ??????? "bytes_written_sst": 2233090674, ??????? "bytes_written_slow": 0, ??????? "max_bytes_wal": 0, ??????? "max_bytes_db": 1153425408, ??????? "max_bytes_slow": 0, ??????? "read_random_count": 127832, ??????? "read_random_bytes": 2761102524, ??????? "read_random_disk_count": 19206, ??????? "read_random_disk_bytes": 2330400597, ??????? "read_random_buffer_count": 108844, ??????? "read_random_buffer_bytes": 430701927, ??????? "read_count": 21457, ??????? "read_bytes": 1087948189, ??????? "read_prefetch_count": 21438, ??????? "read_prefetch_bytes": 1086853927 ??? }, > If the above doesn't give any information then you may need to export > the bluefs (RocksDB). Then you can run the kvstore-tool on it. I'll look to try this, although I'd say it's some kind of bug. > >>> The easiest way ist to destroy and re-create the OSD with a bigger >>> DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. >> It's well below the 3GiB limit in the guideline ;) > For now. ;) Cluster has 2 years now, data amount is quite stable, I think it will hold for some time ;) Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From a.antreich at proxmox.com Wed Mar 25 11:55:12 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 25 Mar 2020 11:55:12 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <6104c080-a479-1a17-78d7-ffd232b20aed@binovo.es> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> <20200324135404.GJ46219@dona.proxmox.com> <6104c080-a479-1a17-78d7-ffd232b20aed@binovo.es> Message-ID: <20200325105512.GK46219@dona.proxmox.com> On Wed, Mar 25, 2020 at 08:43:41AM +0100, Eneko Lacunza wrote: > Hi Alwin, > > El 24/3/20 a las 14:54, Alwin Antreich escribi?: > > On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: > > > Hi Allwin, > > > > > > El 24/3/20 a las 12:24, Alwin Antreich escribi?: > > > > On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: > > > > > We're seeing a spillover issue with Ceph, using 14.2.8: > > > [...] > > > > > 3. ceph health detail > > > > > HEALTH_WARN BlueFS spillover detected on 3 OSD > > > > > BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD > > > > > osd.3 spilled over 5 MiB metadata from 'db' device (556 MiB used of > > > > > 6.0 GiB) to slow device > > > > > osd.4 spilled over 5 MiB metadata from 'db' device (552 MiB used of > > > > > 6.0 GiB) to slow device > > > > > osd.5 spilled over 5 MiB metadata from 'db' device (551 MiB used of > > > > > 6.0 GiB) to slow device > > > > > > > > > > I may be overlooking something, any idea? Just found also the following ceph > > > > > issue: > > > > > > > > > > https://tracker.ceph.com/issues/38745 > > > > > > > > > > 5MiB of metadata in slow isn't a big problem, but cluster is permanently in > > > > > health Warning state... 
:) > > > > The DB/WAL device is to small and all the new metadata has to be written > > > > to the slow device. This will destroy performance. > > > > > > > > I think the size changes, as the DB gets compacted. > > > Yes. But it isn't too small... it's 6 GiB and there's only ~560MiB of data. > > Yes true. I meant the used of size. But the message is oddly. > > > > You should find the compaction stats in the OSD log files. It could be, > > as in the bug tracker reasoned, that the compaction needs to much space > > and spills over to the slow device. Addionally, if no set extra, the WAL > > will take up 512 MB on the DB device. > I don't see any indication that compaction needs too much space: > > 2020-03-24 14:24:04.883 7f03ffbee700? 4 rocksdb: [db/db_impl.cc:777] ------- > DUMPING STATS ------- > 2020-03-24 14:24:04.883 7f03ffbee700? 4 rocksdb: [db/db_impl.cc:778] > ** DB Stats ** > Uptime(secs): 15000.1 total, 600.0 interval > Cumulative writes: 4646 writes, 18K keys, 4646 commit groups, 1.0 writes per > commit group, ingest: 0.01 GB, 0.00 MB/s > Cumulative WAL: 4646 writes, 1891 syncs, 2.46 writes per sync, written: 0.01 > GB, 0.00 MB/s > Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent > Interval writes: 163 writes, 637 keys, 163 commit groups, 1.0 writes per > commit group, ingest: 0.63 MB, 0.00 MB/s > Interval WAL: 163 writes, 67 syncs, 2.40 writes per sync, written: 0.00 MB, > 0.00 MB/s > Interval stall: 00:00:0.000 H:M:S, 0.0 percent > > ** Compaction Stats [default] ** > Level??? Files?? Size???? Score Read(GB)? Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) > Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) > Avg(sec) KeyIn KeyDrop > ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > ? L0????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? > 0.0?? 1.0????? 0.0???? 33.4 0.02????????????? 0.00???????? 2??? 0.009?????? > 0????? 0 > ? L1????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? > 0.0?? 0.8??? 162.1??? 134.6 0.09????????????? 0.06???????? 1??? 0.092??? > 127K??? 10K > ? L2????? 9/0?? 538.64 MB?? 0.2????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? > 0.0? 43.6??? 102.7??? 101.2 5.32????????????? 1.31???????? 1??? 5.325?? > 1496K?? 110K > ?Sum????? 9/0?? 538.64 MB?? 0.0????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? > 0.0 961.1??? 103.3??? 101.5 5.43????????????? 1.37???????? 4??? 1.358?? > 1623K?? 121K > ?Int????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? > 0.0?? 0.0????? 0.0????? 0.0 0.00????????????? 0.00???????? 0??? 0.000?????? > 0????? 0 > > ** Compaction Stats [default] ** > Priority??? Files?? Size???? Score Read(GB)? Rn(GB) Rnp1(GB) Write(GB) > Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) > Comp(cnt) Avg(sec) KeyIn KeyDrop > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > ?Low????? 0/0??? 0.00 KB?? 0.0????? 0.5???? 0.0????? 0.5 0.5????? 0.0?????? > 0.0?? 0.0??? 103.7??? 101.7 5.42????????????? 1.36???????? 2??? 2.708?? > 1623K?? 121K > High????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? > 0.0?? 0.0????? 0.0???? 43.9 0.01????????????? 0.00???????? 1??? 0.013?????? > 0????? 0 > User????? 0/0??? 0.00 KB?? 0.0????? 0.0???? 0.0????? 0.0 0.0????? 0.0?????? > 0.0?? 0.0????? 0.0????? 0.4 0.00????????????? 
0.00???????? 1??? 0.004?????? > 0????? 0 > Uptime(secs): 15000.1 total, 600.0 interval > Flush(GB): cumulative 0.001, interval 0.000 > AddFile(GB): cumulative 0.000, interval 0.000 > AddFile(Total Files): cumulative 0, interval 0 > AddFile(L0 Files): cumulative 0, interval 0 > AddFile(Keys): cumulative 0, interval 0 > Cumulative compaction: 0.54 GB write, 0.04 MB/s write, 0.55 GB read, 0.04 > MB/s read, 5.4 seconds > Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s > read, 0.0 seconds > Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 > level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for > pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 > memtable_compaction, 0 memtable_slowdown, interval 0 total count > > I see the following in a perf dump: > > ??? "bluefs": { > ??????? "gift_bytes": 0, > ??????? "reclaim_bytes": 0, > ??????? "db_total_bytes": 6442442752, > ??????? "db_used_bytes": 696246272, > ??????? "wal_total_bytes": 0, > ??????? "wal_used_bytes": 0, > ??????? "slow_total_bytes": 40004222976, > ??????? "slow_used_bytes": 5242880, > ??????? "num_files": 20, > ??????? "log_bytes": 41631744, > ??????? "log_compactions": 0, > ??????? "logged_bytes": 40550400, > ??????? "files_written_wal": 2, > ??????? "files_written_sst": 41, > ??????? "bytes_written_wal": 102040973, > ??????? "bytes_written_sst": 2233090674, > ??????? "bytes_written_slow": 0, > ??????? "max_bytes_wal": 0, > ??????? "max_bytes_db": 1153425408, > ??????? "max_bytes_slow": 0, > ??????? "read_random_count": 127832, > ??????? "read_random_bytes": 2761102524, > ??????? "read_random_disk_count": 19206, > ??????? "read_random_disk_bytes": 2330400597, > ??????? "read_random_buffer_count": 108844, > ??????? "read_random_buffer_bytes": 430701927, > ??????? "read_count": 21457, > ??????? "read_bytes": 1087948189, > ??????? "read_prefetch_count": 21438, > ??????? "read_prefetch_bytes": 1086853927 > ??? }, > > > > If the above doesn't give any information then you may need to export > > the bluefs (RocksDB). Then you can run the kvstore-tool on it. > I'll look to try this, although I'd say it's some kind of bug. > > > > > > The easiest way ist to destroy and re-create the OSD with a bigger > > > > DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. > > > It's well below the 3GiB limit in the guideline ;) > > For now. ;) > Cluster has 2 years now, data amount is quite stable, I think it will hold > for some time ;) Hm... Igor recons that this seems to be normal. https://tracker.ceph.com/issues/38745#note-28 -- Cheers, Alwin From elacunza at binovo.es Wed Mar 25 12:27:56 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Wed, 25 Mar 2020 12:27:56 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <20200325105512.GK46219@dona.proxmox.com> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> <20200324135404.GJ46219@dona.proxmox.com> <6104c080-a479-1a17-78d7-ffd232b20aed@binovo.es> <20200325105512.GK46219@dona.proxmox.com> Message-ID: <1705a3a7-8d5e-999b-0bfb-f8b9f155a49c@binovo.es> Hi Alwin, El 25/3/20 a las 11:55, Alwin Antreich escribi?: > >>>>> The easiest way ist to destroy and re-create the OSD with a bigger >>>>> DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. >>>> It's well below the 3GiB limit in the guideline ;) >>> For now. 
;) >> Cluster has 2 years now, data amount is quite stable, I think it will hold >> for some time ;) > Hm... Igor recons that this seems to be normal. > https://tracker.ceph.com/issues/38745#note-28 Thanks for the heads up, I didn't receive an email from tracker about the update. Just replied with requested info. I think it's "normal" in the sense that it is the way code works now, but there doesn't seem to be a good reason. I'm not really worried about the performance, but about having to suppress the spillover warning ;) Thanks a lot Eneko -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From a.antreich at proxmox.com Wed Mar 25 13:32:39 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 25 Mar 2020 13:32:39 +0100 Subject: [PVE-User] Spillover issue In-Reply-To: <1705a3a7-8d5e-999b-0bfb-f8b9f155a49c@binovo.es> References: <17dfcaaa-1054-6d10-ec3d-da4567a4ae41@binovo.es> <20200324112423.GI46219@dona.proxmox.com> <7889deca-f7af-2d89-8a97-d05c67a69d5b@binovo.es> <20200324135404.GJ46219@dona.proxmox.com> <6104c080-a479-1a17-78d7-ffd232b20aed@binovo.es> <20200325105512.GK46219@dona.proxmox.com> <1705a3a7-8d5e-999b-0bfb-f8b9f155a49c@binovo.es> Message-ID: <20200325123239.GL46219@dona.proxmox.com> On Wed, Mar 25, 2020 at 12:27:56PM +0100, Eneko Lacunza wrote: > Hi Alwin, > > El 25/3/20 a las 11:55, Alwin Antreich escribi?: > > > > > > > > The easiest way ist to destroy and re-create the OSD with a bigger > > > > > > DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. > > > > > It's well below the 3GiB limit in the guideline ;) > > > > For now. ;) > > > Cluster has 2 years now, data amount is quite stable, I think it will hold > > > for some time ;) > > Hm... Igor recons that this seems to be normal. > > https://tracker.ceph.com/issues/38745#note-28 > Thanks for the heads up, I didn't receive an email from tracker about the > update. Just replied with requested info. > > I think it's "normal" in the sense that it is the way code works now, but > there doesn't seem to be a good reason. I'm not really worried about the > performance, but about having to suppress the spillover warning ;) Yeah. I hope this has been resolved when we introduce Ceph Octopus. :) -- Cheers, Alwin From gilberto.nunes32 at gmail.com Wed Mar 25 16:00:11 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Wed, 25 Mar 2020 12:00:11 -0300 Subject: [PVE-User] Use LVM from XenServerf into Proxmox 6 Message-ID: Hi there! I have installed Proxmox 6 in a former XenServer. Now I have this LVM Physical Volume, that comes from XenServer... Is there any way to convert this PV to use with Proxmox?? Thanks! pve3:~# pvs /dev/sdb: open failed: No medium found Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. 
PV VG Fmt Attr PSize PFree /dev/sda3 pve lvm2 a-- <119.50g <81.75g /dev/sdc VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdd VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sde VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdf VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdg VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdh VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdi VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdj VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g /dev/sdk VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- <419.18g <419.18g [unknown] VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-m <516.86g <516.86g pve3:~# lvs /dev/sdb: open failed: No medium found Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert MGT VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b -wi-----p- 4.00m root pve -wi-ao---- 29.75g swap pve -wi-ao---- 8.00g pve3:~# --- Gilberto Nunes Ferreira From devzero at web.de Thu Mar 26 10:18:56 2020 From: devzero at web.de (Roland) Date: Thu, 26 Mar 2020 10:18:56 +0100 Subject: [PVE-User] Use LVM from XenServerf into Proxmox 6 In-Reply-To: References: Message-ID: <63c5f4c6-a993-5515-9937-5ade8ff33206@web.de> maybe - but why? if there is no data to be preserved, i would zap it with wipefs and freshly re-initialize from proxmox webgui roland Am 25.03.20 um 16:00 schrieb Gilberto Nunes: > Hi there! I have installed Proxmox 6 in a former XenServer. Now I have this > LVM Physical Volume, that comes from XenServer... Is there any way to > convert this PV to use with Proxmox?? Thanks! > > pve3:~# pvs > /dev/sdb: open failed: No medium found > Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. > PV VG Fmt Attr > PSize PFree > /dev/sda3 pve lvm2 a-- > <119.50g <81.75g > /dev/sdc VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdd VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sde VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdf VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdg VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdh VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdi VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdj VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > /dev/sdk VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > <419.18g <419.18g > [unknown] VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-m > <516.86g <516.86g > pve3:~# lvs > /dev/sdb: open failed: No medium found > Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. 
> LV VG Attr LSize > Pool Origin Data% Meta% Move Log > Cpy%Sync Convert > MGT VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b -wi-----p- 4.00m > > > root pve -wi-ao---- 29.75g > > > swap pve -wi-ao---- 8.00g > > > pve3:~# > --- > Gilberto Nunes Ferreira > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user From gilberto.nunes32 at gmail.com Thu Mar 26 12:30:15 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Thu, 26 Mar 2020 08:30:15 -0300 Subject: [PVE-User] Use LVM from XenServerf into Proxmox 6 In-Reply-To: <63c5f4c6-a993-5515-9937-5ade8ff33206@web.de> References: <63c5f4c6-a993-5515-9937-5ade8ff33206@web.de> Message-ID: Of course there is data that I need get back otherwise I already had wiped all partitions. Em qui, 26 de mar de 2020 06:18, Roland escreveu: > maybe - but why? > > if there is no data to be preserved, i would zap it with wipefs and > freshly re-initialize from proxmox webgui > > roland > > Am 25.03.20 um 16:00 schrieb Gilberto Nunes: > > Hi there! I have installed Proxmox 6 in a former XenServer. Now I have > this > > LVM Physical Volume, that comes from XenServer... Is there any way to > > convert this PV to use with Proxmox?? Thanks! > > > > pve3:~# pvs > > /dev/sdb: open failed: No medium found > > Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. > > PV VG Fmt > Attr > > PSize PFree > > /dev/sda3 pve lvm2 a-- > > <119.50g <81.75g > > /dev/sdc VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdd VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sde VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdf VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdg VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdh VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdi VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdj VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > /dev/sdk VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-- > > <419.18g <419.18g > > [unknown] VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 a-m > > <516.86g <516.86g > > pve3:~# lvs > > /dev/sdb: open failed: No medium found > > Couldn't find device with uuid 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. > > LV VG Attr > LSize > > Pool Origin Data% Meta% Move Log > > Cpy%Sync Convert > > MGT VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b -wi-----p- > 4.00m > > > > > > root pve -wi-ao---- > 29.75g > > > > > > swap pve -wi-ao---- > 8.00g > > > > > > pve3:~# > > --- > > Gilberto Nunes Ferreira > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Thu Mar 26 13:03:14 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Thu, 26 Mar 2020 13:03:14 +0100 Subject: [PVE-User] Use LVM from XenServerf into Proxmox 6 In-Reply-To: <63c5f4c6-a993-5515-9937-5ade8ff33206@web.de> References: <63c5f4c6-a993-5515-9937-5ade8ff33206@web.de> Message-ID: Hi Gilberto, You need to fix your LVM first (not a Proxmox issue). I see you have a lot of PVs, but no (old) LVs show. 
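Concretely, once LVM is happy again, a read-only check along these lines should show whether the old Xen LVs are really gone or just not being listed (only a sketch; the VG name is taken from your pvs output, the commands are plain LVM2 tools):

  vgscan
  vgs VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b
  lvs -a -o +devices VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b
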
Also, you seem to have missing at least a PV (/dev/sdb?) Fix that first, then let's see what output give "vgs" and "lvs". You need to see VMs disks with "lvs" first. The you can create .conf files for proxmox and rename Xen LVs. Cheers Eneko El 26/3/20 a las 10:18, Roland escribi?: > maybe - but why? > > if there is no data to be preserved, i would zap it with wipefs and > freshly re-initialize from proxmox webgui > > roland > > Am 25.03.20 um 16:00 schrieb Gilberto Nunes: >> Hi there! I have installed Proxmox 6 in a former XenServer. Now I >> have this >> LVM Physical Volume, that comes from XenServer... Is there any way to >> convert this PV to use with Proxmox?? Thanks! >> >> pve3:~# pvs >> ?? /dev/sdb: open failed: No medium found >> ?? Couldn't find device with uuid >> 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. >> ?? PV???????? VG Fmt? Attr >> PSize??? PFree >> ?? /dev/sda3? pve lvm2 a-- >> ? <119.50g? <81.75g >> ?? /dev/sdc?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdd?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sde?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdf?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdg?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdh?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdi?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdj?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? /dev/sdk?? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-- >> ? <419.18g <419.18g >> ?? [unknown]? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b lvm2 >> a-m >> ? <516.86g <516.86g >> pve3:~# lvs >> ?? /dev/sdb: open failed: No medium found >> ?? Couldn't find device with uuid >> 4SdKfV-FHGs-yxMV-2Rac-uhBF-XelE-DLycOf. >> ?? LV?? VG Attr?????? LSize >> ? Pool Origin Data%? Meta%? Move Log >> ? Cpy%Sync Convert >> ?? MGT? VG_XenStorage-641fb092-12ff-4897-f8d2-9dd45d502e8b >> -wi-----p-? 4.00m >> >> >> ?? root pve -wi-ao---- 29.75g >> >> >> ?? swap pve -wi-ao----? 8.00g >> >> >> pve3:~# >> --- >> Gilberto Nunes Ferreira >> _______________________________________________ >> pve-user mailing list >> pve-user at pve.proxmox.com >> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From vadim.bulst at bbz.uni-leipzig.de Thu Mar 26 18:34:38 2020 From: vadim.bulst at bbz.uni-leipzig.de (Vadim Bulst) Date: Thu, 26 Mar 2020 18:34:38 +0100 Subject: [PVE-User] Installation aborted Message-ID: <3a0a1a70-0fcb-bbea-233f-11429f72c7a8@bbz.uni-leipzig.de> Hi PVE users, I'm using PVE for some years now. I got a runnig cluster of 6 machines in version 6.x . Today I was going to add to additional machines. But the installation was aborting on both servers? There are two 10G Nics for the frontend network which will be trunked later on and 56G IB nic for the backend network . 
For some reason the installation is aborting because of no dhcp lease. See the screenshot : any idea or workaround? Cheers, Vadim -- Vadim Bulst Universit?t Leipzig / URZ 04109 Leipzig, Augustusplatz 10 phone: +49-341-97-33380 mail:vadim.bulst at uni-leipzig.de From f.gruenbichler at proxmox.com Fri Mar 27 08:11:42 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Fri, 27 Mar 2020 08:11:42 +0100 Subject: [PVE-User] Installation aborted In-Reply-To: <3a0a1a70-0fcb-bbea-233f-11429f72c7a8@bbz.uni-leipzig.de> References: <3a0a1a70-0fcb-bbea-233f-11429f72c7a8@bbz.uni-leipzig.de> Message-ID: <1585293004.atwyz6xkzb.astroid@nora.none> On March 26, 2020 6:34 pm, Vadim Bulst wrote: > Hi PVE users, > > I'm using PVE for some years now. I got a runnig cluster of 6 machines > in version 6.x . Today I was going to add to additional machines. But > the installation was aborting on both servers? There are two 10G Nics > for the frontend network which will be trunked later on and 56G IB nic > for the backend network . For some reason the installation is aborting > because of no dhcp lease. See the screenshot : screenshots (all attachments in fact) get removed on the list. the no dhcp lease message is usually just the last thing that gets printed, and not the actual cause. can you try booting in Debug mode, and checking /tmp/install.log and dmesg for errors? > any idea or workaround? did you try the 6.1 iso? could you give the 6.0 a try (or vice-versa ;)) From gilberto.nunes32 at gmail.com Sat Mar 28 15:04:12 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Sat, 28 Mar 2020 11:04:12 -0300 Subject: [PVE-User] Help with Ceph in PVE6 Message-ID: Help with Ceph in PVE 6 Hi I have a ceph cluster created with 3 server.... ServerA has 3 SAS 512GB HDD and 1 SAS 1.3 TB ServerB has 3 SAS 512GB HDD and 1 SAS 1.3 TB ServerS has 3 SAS 512GB HDD and 1 SAS 1.3 TB I have one pool named VMS with size/min 3/2 and pg_num initially created with 256 but I have increased to 512 and an hour ago to 768 but it's see hava not effect on it... Ceph health apparently is ok but get this with ceph -s command: ceph -s cluster: id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2 health: HEALTH_OK services: mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 15h) mgr: pve3(active, since 15h), standbys: pve4, pve5, pve7, pve6 osd: 12 osds: 12 up (since 10m), 12 in (since 10m); 497 remapped pgs data: pools: 1 pools, 768 pgs objects: 279.34k objects, 1.1 TiB usage: 3.0 TiB used, 6.2 TiB / 9.1 TiB avail pgs: 375654/838011 objects misplaced (44.827%) 494 active+remapped+backfill_wait 271 active+clean 3 active+remapped+backfilling io: client: 140 KiB/s rd, 397 KiB/s wr, 12 op/s rd, 64 op/s wr recovery: 52 MiB/s, 14 objects/s Is there any action I can take to fix this? Thanks --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 From gilberto.nunes32 at gmail.com Sat Mar 28 15:08:24 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Sat, 28 Mar 2020 11:08:24 -0300 Subject: [PVE-User] Help with Ceph in PVE6 In-Reply-To: References: Message-ID: [UPDATE] I notice that in [node] -> Ceph -> Pool in Used % column the values is decrease over time! Perhaps need wait to adjust it and than see if active+remapped+backfill_wait and active+remapped+backfilling end it's operations... --- Gilberto Nunes Ferreira Em s?b., 28 de mar. 
de 2020 ?s 11:04, Gilberto Nunes < gilberto.nunes32 at gmail.com> escreveu: > Help with Ceph in PVE 6 > > Hi > > I have a ceph cluster created with 3 server.... > ServerA has 3 SAS 512GB HDD and 1 SAS 1.3 TB > ServerB has 3 SAS 512GB HDD and 1 SAS 1.3 TB > ServerS has 3 SAS 512GB HDD and 1 SAS 1.3 TB > > I have one pool named VMS with size/min 3/2 and pg_num initially created > with 256 but I have increased to 512 and an hour ago to 768 but it's see > hava not effect on it... > > Ceph health apparently is ok but get this with ceph -s command: > > ceph -s > cluster: > id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2 > health: HEALTH_OK > > services: > mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 15h) > mgr: pve3(active, since 15h), standbys: pve4, pve5, pve7, pve6 > osd: 12 osds: 12 up (since 10m), 12 in (since 10m); 497 remapped pgs > > data: > pools: 1 pools, 768 pgs > objects: 279.34k objects, 1.1 TiB > usage: 3.0 TiB used, 6.2 TiB / 9.1 TiB avail > pgs: 375654/838011 objects misplaced (44.827%) > 494 active+remapped+backfill_wait > 271 active+clean > 3 active+remapped+backfilling > > io: > client: 140 KiB/s rd, 397 KiB/s wr, 12 op/s rd, 64 op/s wr > recovery: 52 MiB/s, 14 objects/s > > > Is there any action I can take to fix this? > > Thanks > --- > Gilberto Nunes Ferreira > > (47) 3025-5907 > (47) 99676-7530 - Whatsapp / Telegram > > Skype: gilberto.nunes36 > > > > From gilberto.nunes32 at gmail.com Sun Mar 29 16:46:47 2020 From: gilberto.nunes32 at gmail.com (Gilberto Nunes) Date: Sun, 29 Mar 2020 11:46:47 -0300 Subject: [PVE-User] Some erros in Ceph - PVE6 Message-ID: Hi guys I have installed Proxmox 6 and activate 3 servers with PVE 6 and Ceph. In this 3 server, I have 4 HDD: 3 SAS 500GB 1 SAS 2 TB However, we need to remove this 3 500GB of eache server... So I do out and stop and I am waiting for rebalance, but is took too long... Get this message: Reduced data availability: 2 pgs inactive, 2 pgs down pg 1.3a is down, acting [11,9,10] pg 1.23a is down, acting [11,9,10] (This 11,9,10 it's the 2 TB SAS HDD) And too many PGs per OSD (571 > max 250) I already tried decrease the number of PG to 256 ceph osd pool set VMS pg_num 256 but it seem no effect att all: ceph osd pool get VMS pg_num pg_num: 571 Now, the sitution is that: ceph -s cluster: id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2 health: HEALTH_WARN Reduced data availability: 2 pgs inactive, 2 pgs down Degraded data redundancy: 6913/836472 objects degraded (0.826%), 18 pgs degraded, 19 pgs undersized too many PGs per OSD (571 > max 250) services: mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 51m) mgr: pve3(active, since 39h), standbys: pve5, pve7, pve6, pve4 osd: 12 osds: 3 up (since 16m), 3 in (since 16m); 19 remapped pgs data: pools: 1 pools, 571 pgs objects: 278.82k objects, 1.1 TiB usage: 2.9 TiB used, 2.5 TiB / 5.5 TiB avail pgs: 0.350% pgs not active 6913/836472 objects degraded (0.826%) 550 active+clean 17 active+undersized+degraded+remapped+backfill_wait 2 down 1 active+undersized+degraded+remapped+backfilling 1 active+undersized+remapped+backfill_wait io: client: 15 KiB/s rd, 1.0 MiB/s wr, 3 op/s rd, 102 op/s wr recovery: 15 MiB/s, 3 objects/s progress: Rebalancing after osd.2 marked out [=============================.] Rebalancing after osd.7 marked out [============================..] Rebalancing after osd.6 marked out [==========....................] Do I need to do something or just leave Ceph do this work?? Thanks a lot! 
Cheers From alarig at grifon.fr Sun Mar 29 20:24:07 2020 From: alarig at grifon.fr (Alarig Le Lay) Date: Sun, 29 Mar 2020 20:24:07 +0200 Subject: [PVE-User] local cdrom image Message-ID: <20200329182407.tmtk32rumagzhrag@mew.swordarmor.fr> Hi, If an ISO is configured on a VM while I try to migrate it, I have this message: 2020-03-29 20:14:23 can't migrate local disk 'local:iso/debian-10.0.0-amd64-netinst.iso': local cdrom image However, the ISO isn?t local: root at hv02:~# df -Th /var/lib/vz/template/iso Filesystem Type Size Used Avail Use% Mounted on 172.17.0.48:/mnt/storage_iso nfs 49G 5.7G 39G 13% /var/lib/vz/template/iso And this NFS mount-point is also present on the targeted HV: root at hv03:~# df -Th /var/lib/vz/template/iso Filesystem Type Size Used Avail Use% Mounted on 172.17.0.48:/mnt/storage_iso nfs 49G 5.7G 39G 13% /var/lib/vz/template/iso How to get rid of this message? Thanks, -- Alarig Le Lay From gianni.milo22 at gmail.com Sun Mar 29 22:59:42 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Sun, 29 Mar 2020 21:59:42 +0100 Subject: [PVE-User] local cdrom image In-Reply-To: <20200329182407.tmtk32rumagzhrag@mew.swordarmor.fr> References: <20200329182407.tmtk32rumagzhrag@mew.swordarmor.fr> Message-ID: Go to Datacenter -> Storage and add a new (NFS) storage. Set the "content" to include "ISO image". Go to the VM properties and select the iso image from this storage location instead of "local:". On Sun, 29 Mar 2020 at 19:24, Alarig Le Lay wrote: > Hi, > > If an ISO is configured on a VM while I try to migrate it, I have this > message: > > 2020-03-29 20:14:23 can't migrate local disk > 'local:iso/debian-10.0.0-amd64-netinst.iso': local cdrom image > > However, the ISO isn?t local: > > root at hv02:~# df -Th /var/lib/vz/template/iso > Filesystem Type Size Used Avail Use% Mounted on > 172.17.0.48:/mnt/storage_iso nfs 49G 5.7G 39G 13% > /var/lib/vz/template/iso > > And this NFS mount-point is also present on the targeted HV: > > root at hv03:~# df -Th /var/lib/vz/template/iso > Filesystem Type Size Used Avail Use% Mounted on > 172.17.0.48:/mnt/storage_iso nfs 49G 5.7G 39G 13% > /var/lib/vz/template/iso > > How to get rid of this message? > > Thanks, > -- > Alarig Le Lay > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > From elacunza at binovo.es Mon Mar 30 09:09:18 2020 From: elacunza at binovo.es (Eneko Lacunza) Date: Mon, 30 Mar 2020 09:09:18 +0200 Subject: [PVE-User] Some erros in Ceph - PVE6 In-Reply-To: References: Message-ID: <332baf4a-69bc-d677-3567-7308a001e80c@binovo.es> Hi Gilberto, Generally, you have to wait when Ceph is doing rebalancing etc. until it finishes. Some things can go for hours. Also, try no to change Ceph parameters without being sure and researching documentation and mailing lists. This is a new cluster and you have done things most Ceph users won't do until some years after initial setup :) I suggest next time you have to remove OSD disks from all servers, you do as follows: 1. Out the OSDs (one by one for minimum impact). 2. Wait for rebalancing to finish 3. Down + remove the OSDs https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual Cheers Eneko El 29/3/20 a las 16:46, Gilberto Nunes escribi?: > Hi guys > > I have installed Proxmox 6 and activate 3 servers with PVE 6 and Ceph. > In this 3 server, I have 4 HDD: > 3 SAS 500GB > 1 SAS 2 TB > However, we need to remove this 3 500GB of eache server... 
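If you prefer the shell, the equivalent should be something along these lines (a rough sketch: the storage name "storage_iso" is just an example, server and export are taken from the df output in your mail):

pvesm add nfs storage_iso --server 172.17.0.48 --export /mnt/storage_iso --content iso

which should end up in /etc/pve/storage.cfg roughly as:

nfs: storage_iso
        path /mnt/pve/storage_iso
        server 172.17.0.48
        export /mnt/storage_iso
        content iso

Once the VM's cdrom points at an ISO on that shared storage (or is set to "none"), the migration check should stop complaining about a local cdrom image.
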
So I do out and > stop and I am waiting for rebalance, but is took too long... > Get this message: > Reduced data availability: 2 pgs inactive, 2 pgs down > pg 1.3a is down, acting [11,9,10] > pg 1.23a is down, acting [11,9,10] > (This 11,9,10 it's the 2 TB SAS HDD) > And > too many PGs per OSD (571 > max 250) > I already tried decrease the number of PG to 256 > ceph osd pool set VMS pg_num 256 > but it seem no effect att all: > ceph osd pool get VMS pg_num > pg_num: 571 > > Now, the sitution is that: > > ceph -s > cluster: > id: 93c55c6b-ce64-4e1a-92bc-0bc529d695f2 > health: HEALTH_WARN > Reduced data availability: 2 pgs inactive, 2 pgs down > Degraded data redundancy: 6913/836472 objects degraded (0.826%), > 18 pgs degraded, 19 pgs undersized > too many PGs per OSD (571 > max 250) > > services: > mon: 5 daemons, quorum pve3,pve4,pve5,pve7,pve6 (age 51m) > mgr: pve3(active, since 39h), standbys: pve5, pve7, pve6, pve4 > osd: 12 osds: 3 up (since 16m), 3 in (since 16m); 19 remapped pgs > > data: > pools: 1 pools, 571 pgs > objects: 278.82k objects, 1.1 TiB > usage: 2.9 TiB used, 2.5 TiB / 5.5 TiB avail > pgs: 0.350% pgs not active > 6913/836472 objects degraded (0.826%) > 550 active+clean > 17 active+undersized+degraded+remapped+backfill_wait > 2 down > 1 active+undersized+degraded+remapped+backfilling > 1 active+undersized+remapped+backfill_wait > > io: > client: 15 KiB/s rd, 1.0 MiB/s wr, 3 op/s rd, 102 op/s wr > recovery: 15 MiB/s, 3 objects/s > > progress: > Rebalancing after osd.2 marked out > [=============================.] > Rebalancing after osd.7 marked out > [============================..] > Rebalancing after osd.6 marked out > [==========....................] > > > Do I need to do something or just leave Ceph do this work?? > > Thanks a lot! > > Cheers > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Zuzendari Teknikoa / Director T?cnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarragako bidea 2, 2? izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es From vadim.bulst at bbz.uni-leipzig.de Mon Mar 30 22:22:41 2020 From: vadim.bulst at bbz.uni-leipzig.de (vadim.bulst at bbz.uni-leipzig.de) Date: Mon, 30 Mar 2020 22:22:41 +0200 Subject: [PVE-User] Installation aborted In-Reply-To: <1585293004.atwyz6xkzb.astroid@nora.none> References: <3a0a1a70-0fcb-bbea-233f-11429f72c7a8@bbz.uni-leipzig.de> <1585293004.atwyz6xkzb.astroid@nora.none> Message-ID: Hi Fabian, thanks for your reply. I haven't tried? to use the 6.0 iso - sorry. I installed Debian Buster vanilla and added PVE-packages afterwards . At least this was working. To track down the reason is to difficult . These machines are reused to get online services like online class rooms for our students started. 6 TB of ram and 128 Cores each? - so it takes at least 20 min to boot them up. Cheers, Vadim On 27.03.20 08:11, Fabian Gr?nbichler wrote: > On March 26, 2020 6:34 pm, Vadim Bulst wrote: >> Hi PVE users, >> >> I'm using PVE for some years now. I got a runnig cluster of 6 machines >> in version 6.x . Today I was going to add to additional machines. But >> the installation was aborting on both servers? There are two 10G Nics >> for the frontend network which will be trunked later on and 56G IB nic >> for the backend network . For some reason the installation is aborting >> because of no dhcp lease. See the screenshot : > screenshots (all attachments in fact) get removed on the list. 
the no > dhcp lease message is usually just the last thing that gets printed, and > not the actual cause. can you try booting in Debug mode, and checking > /tmp/install.log and dmesg for errors? > >> any idea or workaround? > did you try the 6.1 iso? could you give the 6.0 a try (or vice-versa ;)) > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user -- Vadim Bulst Universit?t Leipzig / URZ 04109 Leipzig, Augustusplatz 10 phone: +49-341-97-33380 mail: vadim.bulst at uni-leipzig.de From d.csapak at proxmox.com Tue Mar 31 08:19:52 2020 From: d.csapak at proxmox.com (Dominik Csapak) Date: Tue, 31 Mar 2020 08:19:52 +0200 Subject: [PVE-User] proxmox don't detect more than 26 disks ( /dev/sdXX) In-Reply-To: References: <91cccb94-fc8c-23be-8403-2dcec790375a@proxmox.com> Message-ID: <1a563105-944f-cd4d-939a-8601f9aa8f9d@proxmox.com> hi, sorry for the late reply (i was on holiday) could you post a dmesg output of both kernels? (the lspci looked the same) regards On 3/16/20 3:50 PM, Humberto Jose De Sousa via pve-user wrote: > 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part (rev 02) > Subsystem: Hewlett-Packard Company RD890 Northbridge only dual slot (2x16) PCI-e GFX Hydra part > 00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD/ATI] RD890S/RD990 I/O Memory Management Unit (IOMMU) > Subsystem: Hewlett-Packard Company RD890S/RD990 I/O Memory Management Unit (IOMMU) > 00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 0) > Kernel driver in use: pcieport > Kernel modules: shpchp > 00:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 5) > Kernel driver in use: pcieport > Kernel modules: shpchp > 00:0b.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 0) > Kernel driver in use: pcieport > Kernel modules: shpchp > 00:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD990 PCI to PCI bridge (PCI Express GFX2 port 1) > Kernel driver in use: pcieport > Kernel modules: shpchp > 00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] > Kernel driver in use: ahci > Kernel modules: ahci > 00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller > Kernel driver in use: ohci-pci > 00:12.1 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0 USB OHCI1 Controller > Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller > Kernel driver in use: ohci-pci > 00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller > Kernel driver in use: ehci-pci > 00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB OHCI0 Controller > Kernel driver in use: ohci-pci > 00:13.1 USB controller: Advanced Micro Devices, Inc. 
[AMD/ATI] SB7x0 USB OHCI1 Controller > Subsystem: Hewlett-Packard Company SB7x0 USB OHCI1 Controller > Kernel driver in use: ohci-pci > 00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 USB EHCI Controller > Kernel driver in use: ehci-pci > 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller (rev 3d) > Subsystem: Hewlett-Packard Company SBx00 SMBus Controller > Kernel driver in use: piix4_smbus > Kernel modules: i2c_piix4, sp5100_tco > 00:14.1 IDE interface: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 IDE Controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 IDE Controller > Kernel driver in use: pata_atiixp > Kernel modules: pata_atiixp, pata_acpi > 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 LPC host controller > Subsystem: Hewlett-Packard Company SB7x0/SB8x0/SB9x0 LPC host controller > 00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI Bridge > 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 > 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 > 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 > 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 > Kernel driver in use: k10temp > Kernel modules: k10temp > 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 > Kernel driver in use: fam15h_power > Kernel modules: fam15h_power > 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 > 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 > 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 > 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 > 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 > Kernel driver in use: k10temp > Kernel modules: k10temp > 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 > Kernel modules: fam15h_power > 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 > 00:1a.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 > 00:1a.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 > 00:1a.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 > 00:1a.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 > Kernel driver in use: k10temp > Kernel modules: k10temp > 00:1a.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 4 > Kernel driver in use: fam15h_power > Kernel modules: fam15h_power > 00:1a.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 > 00:1b.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 0 > 00:1b.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 1 > 00:1b.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 2 > 00:1b.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 3 > Kernel driver in use: k10temp > Kernel modules: k10temp > 00:1b.4 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Family 15h Processor Function 4 > Kernel modules: fam15h_power > 00:1b.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 15h Processor Function 5 > 02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 05) > Subsystem: Hewlett-Packard Company iLO4 > Kernel modules: hpwdt > 02:00.1 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200EH > Subsystem: Hewlett-Packard Company iLO4 > Kernel driver in use: mgag200 > Kernel modules: mgag200 > 02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 05) > Subsystem: Hewlett-Packard Company iLO4 > Kernel driver in use: hpilo > Kernel modules: hpilo > 02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 02) > Subsystem: Hewlett-Packard Company iLO4 > Kernel driver in use: uhci_hcd > 03:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01) > Subsystem: Hewlett-Packard Company P220i > Kernel driver in use: hpsa > Kernel modules: hpsa > 04:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) > Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] > Kernel driver in use: bnx2x > Kernel modules: bnx2x > 04:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) > Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] > Kernel driver in use: bnx2x > Kernel modules: bnx2x > 04:00.2 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) > Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] > Kernel driver in use: bnx2x > Kernel modules: bnx2x > 04:00.3 Ethernet controller: Broadcom Limited NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10) > Subsystem: Hewlett-Packard Company Flex-10 10Gb 2-port 530FLB Adapter [Meru] > Kernel driver in use: bnx2x > Kernel modules: bnx2x > 05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) > Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA > Kernel driver in use: qla2xxx > Kernel modules: qla2xxx > 05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02) > Subsystem: Hewlett-Packard Company ISP2532-based 8Gb Fibre Channel to PCI Express HBA > Kernel driver in use: qla2xxx > Kernel modules: qla2xxx From gbr at majentis.com Wed Mar 18 17:15:32 2020 From: gbr at majentis.com (Gerald Brandt) Date: Wed, 18 Mar 2020 11:15:32 -0500 Subject: [PVE-User] Cannot start VM - timeout waiting on systemd Message-ID: I get a timeout waiting on systemd when I try to start a VM. Any ideas. It's not CPU load or memory. Mar 18 11:11:58 proxmox-1 pvedaemon[5035]: start VM 141: UPID:proxmox-1:000013AB:0838CFFC:5E72484E:qmstart:141:root at pam: Mar 18 11:11:58 proxmox-1 pvedaemon[2463]: starting task UPID:proxmox-1:000013AB:0838CFFC:5E72484E:qmstart:141:root at pam: Mar 18 11:12:00 proxmox-1 systemd[1]: Starting Proxmox VE replication runner... Mar 18 11:12:01 proxmox-1 pmxcfs[3161]: [status] notice: received log Mar 18 11:12:02 proxmox-1 systemd[1]: Started Session 459 of user root. Mar 18 11:12:05 proxmox-1 systemd[1]: Started Proxmox VE replication runner. 
Mar 18 11:12:06 proxmox-1 pvedaemon[5035]: timeout waiting on systemd
Mar 18 11:12:06 proxmox-1 pvedaemon[2463]: end task UPID:proxmox-1:000013AB:0838CFFC:5E72484E:qmstart:141:root at pam: timeout waiting on systemd
Mar 18 11:12:07 proxmox-1 qm[5123]: VM 141 qmp command failed - VM 141 not running
Mar 18 11:12:08 proxmox-1 pmxcfs[3161]: [status] notice: received log
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284039] INFO: task kvm:5081 blocked for more than 120 seconds.
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284218]       Tainted: P           O     4.15.18-24-pve #1
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284358] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284540] kvm  D    0  5081      1 0x80000006
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284553] Call Trace:
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284579] __schedule+0x3e0/0x870
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284590] schedule+0x36/0x80
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284601] schedule_timeout+0x1d4/0x360
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284613]  ? call_rcu_sched+0x17/0x20
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284626]  ? __percpu_ref_switch_mode+0xd7/0x180
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284635] wait_for_completion+0xb4/0x140
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284644]  ? wake_up_q+0x80/0x80
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284657] exit_aio+0xeb/0x100
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284670] mmput+0x2b/0x130
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284696] vhost_dev_cleanup+0x382/0x3b0 [vhost]
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284711] vhost_net_release+0x53/0xb0 [vhost_net]
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284721] __fput+0xea/0x220
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284731] ____fput+0xe/0x10
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284742] task_work_run+0x9d/0xc0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284752] do_exit+0x2f6/0xbd0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284762]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284771]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284778]  ? __switch_to_asm+0x35/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284785]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284793]  ? __switch_to_asm+0x35/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284802] do_group_exit+0x43/0xb0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284812] get_signal+0x15a/0x7f0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284825]  ? do_futex+0x7e6/0xd10
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284838] do_signal+0x37/0x710
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284849]  ? blk_finish_plug+0x2c/0x40
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284859]  ? hrtimer_nanosleep+0xd8/0x1f0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284866]  ? SyS_futex+0x83/0x180
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284878] exit_to_usermode_loop+0x80/0xd0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284888] do_syscall_64+0x100/0x130
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284899] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284907] RIP: 0033:0x7f6aff09c469
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284912] RSP: 002b:00007f6af33fc638 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284922] RAX: fffffffffffffe00 RBX: 000055e6ad1894c8 RCX: 00007f6aff09c469
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284927] RDX: 00000000ffffffff RSI: 0000000000000000 RDI: 000055e6ad1894c8
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284931] RBP: 00000000ffffffff R08: 0000000000000000 R09: 0000000000000000
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284936] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284941] R13: 00007ffcf670c33f R14: 00007f6af2bff000 R15: 0000000000000003
Mar 18 11:12:08 proxmox-1 pvestatd[3372]: got timeout

From vadim.bulst at bbz.uni-leipzig.de Thu Mar 26 18:28:22 2020
From: vadim.bulst at bbz.uni-leipzig.de (Vadim Bulst)
Date: Thu, 26 Mar 2020 18:28:22 +0100
Subject: [PVE-User] Installation aborted
Message-ID: 

Hi PVE users,

I've been using PVE for some years now. I have a running cluster of 6 machines on version 6.x. Today I was going to add two additional machines, but the installation aborted on both servers.

There are two 10G NICs for the frontend network, which will be trunked later on, and a 56G IB NIC for the backend network. For some reason the installation aborts because of a missing DHCP lease. See the screenshot: any idea or workaround?

Cheers,

Vadim

--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-33380
mail: vadim.bulst at uni-leipzig.de

From m at plus-plus.su Tue Mar 31 12:21:58 2020
From: m at plus-plus.su (Mikhail)
Date: Tue, 31 Mar 2020 13:21:58 +0300
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
Message-ID: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su>

Hello,

On one of our clusters we're seeing issues with a VM backup task - the backup task fails with the following:

ERROR: Node 'drive-scsi0' is busy: block device is in use by block job: mirror
INFO: aborting backup job
ERROR: Backup of VM 123 failed - Node 'drive-scsi0' is busy: block device is in use by block job: mirror

I dug up some information and it appears that there is a "drive-mirror" job in QEMU/KVM for this particular VM that is blocking the backup process. It is clear to me that this problem started a couple of weeks ago when we attempted to change the VM disk's underlying storage; the "Move disk" operation was cancelled manually by an administrator at the time, and backup tasks started failing right after that. I'm not sure whether this is a Proxmox issue or a QEMU/KVM one. I suppose that stopping and starting the VM from within Proxmox would remove this block, but in our case keeping this virtual machine up and running is critical and we need to avoid even 1-2 minutes of downtime.

The question is how to remove this drive-mirror block online, and how to avoid this in the future.

# pveversion
pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve)

Thanks!
Mikhail.
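A minimal sketch of the kind of online check such a stuck mirror calls for, using the monitor access Proxmox VE provides. The VMID 123 is taken from the error above; treat the commands as a sketch rather than a guaranteed fix (the follow-ups later in this thread show the same commands in use):

    qm monitor 123                      # open the QEMU human monitor for VM 123
    info block-jobs                     # the leftover mirror job should show up here
    block_job_cancel -f drive-scsi0     # force-cancel it (block_job_complete would pivot to the mirror target instead)

The QMP equivalents (query-block-jobs, block-job-cancel, block-job-complete) are documented in qemu-qmp-ref and can be sent over the VM's QMP socket if scripted access is preferred.

From sivakumar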
From sivakumar.saravanan.jv.ext at valeo-siemens.com Tue Mar 31 12:35:36 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Tue, 31 Mar 2020 12:35:36 +0200
Subject: [PVE-User] Fwd: Hardware based License are not recognized from VM
In-Reply-To: 
References: 
Message-ID: 

Hello,

We have deployed around 20 Proxmox VE hosts in our environment to test software compatibility. We are planning to roll out the same setup in our production environment once the compatibility tests have passed. Please note that we will deploy a maximum of 6 VMs on each Proxmox VE host, because we require a dedicated PCI/USB connection for each VM, so we have configured 6 PCI cards on each hypervisor server. A few licenses are activated at the hardware device level, and that hardware is connected via USB.

The problem is that these hardware-based licenses are not recognized by the software tool running inside the VM. The same USB connection works fine with a laptop, but not with a Proxmox VM. The software tool is installed in the VM and has to recognize the license that is activated on the hardware device, which is connected to the VM via PCI.

Please refer to the attached file, which shows the physical connection overview.

Kindly help us to resolve the issue - what could be the reason the license is not recognized from the VM?

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

Externer Dienstleister für / External service provider for
Valeo Siemens eAutomotive Germany GmbH
Research & Development
R & D SWENG TE 1 INFTE
Frauenauracher Straße 85
91056 Erlangen, Germany
Tel.: +49 9131 9892 0000
Mobile: +49 176 7698 5441
sivakumar.saravanan.jv.ext at valeo-siemens.com
valeo-siemens.com

Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger Schwab, Peter Geilen; Chairman of the Supervisory Board: Hartmut Klötzer; Registered office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655

*This e-mail message is intended for the internal use of the intended recipient(s) only.
The information contained herein is confidential/privileged. Its disclosure or reproduction is strictly prohibited.
If you are not the intended recipient, please inform the sender immediately, do not disclose it internally or to third parties and destroy it.

In the course of our business relationship and for business purposes only, Valeo may need to process some of your personal data.
For more information, please refer to the Valeo Data Protection Statement and Privacy notice available on Valeo.com * From f.gruenbichler at proxmox.com Tue Mar 31 13:53:00 2020 From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=) Date: Tue, 31 Mar 2020 13:53:00 +0200 Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move In-Reply-To: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> Message-ID: <1585655214.ioujt1t68q.astroid@nora.none> On March 31, 2020 12:21 pm, Mikhail wrote: > Hello, > > On one of our clusters we're seeing issues with VM backup task - the > backup task fails with the following: > > ERROR: Node 'drive-scsi0' is busy: block device is in use by block job: > mirror > INFO: aborting backup job > ERROR: Backup of VM 123 failed - Node 'drive-scsi0' is busy: block > device is in use by block job: mirror > > I tried digging some information and it appears to be that there's > "drive-mirror" job in QEMU/KVM for this particular VM is blocking backup > process. It is clear to me that this problem started couple weeks ago > when we attempted to change VM's disk underlying storage, however the > "Move disk" operation was cancelled manually by administrator at the > time and backup tasks started failing right after that. I'm not sure > whether this is Proxmox issue or QEMU/KVM, but I suppose that stopping > and starting VM from within Proxmox will remove this block, however in > our case keeping this virtual machine up and running is critical and we > should avoid even 1-2 min downtime. yes, shutting the VM down and starting it again gets rid of any leftover block-jobs for sure. > The question is how to remove this drive-mirror block online and how to > avoid this in the future. you should be able to manually clean the messup using the QMP/monitor interface: `man qemu-qmp-ref` gives a detailed tour, you probably want `query-block-jobs` and `query-block`, and then, depending on the output `block-job-cancel` or `block-job-complete`. the HMP interface accessible via 'qm monitor ' has slightly different commands: `info block -v`, `info block-jobs` and `block_job_cancel`/`block_job_complete` ('_' instead of '-'). feel free to post the output of the query/info commands before deciding how to proceed. the complete task log of the failed 'move disk' operation would also be interesting, if it is still available. From pve at junkyard.4t2.com Tue Mar 31 14:19:24 2020 From: pve at junkyard.4t2.com (Tom Weber) Date: Tue, 31 Mar 2020 14:19:24 +0200 Subject: [PVE-User] Fwd: Hardware based License are not recognized from VM In-Reply-To: References: Message-ID: <32522003e03745c8ef861b71e20dc8316344fd91.camel@junkyard.4t2.com> If you're talking about the typical USB Dongle in Hardware as License Key, I'd always try to use an USB Network Device Server and avoid putting the Dongles in the Server. This also gives you the ability to move the VMs around. Regards, Tom Am Dienstag, den 31.03.2020, 12:35 +0200 schrieb Sivakumar SARAVANAN: > Hello, > > We have deployed the around 20 Proxmox VE in our environment to test > the > all software compatibility. We are planning to implement the same in > our > production environment once software compatibility test are passed. > please > note that we will deploy maximum 6 VM's on each Proxmox VE host. > Because we > required dedicated PCI/ USB connection to each VM. So we have > configured 6 > PCI cards on each Hypervisor server. 
Because there are few licenses > are > activated on hardware device level, which is connected via USB. The > problem > is hardware based licenses are not recognized by software tool from > VM. But > the same USB connection with Laptop works fine. But not from Proxmox > VM. > The software tool is installed the VM and tool has to recognized > license, > which is activated on hardware device and device is connected with VM > via > PCI. > > > Please refer to the attached file, which tells about the physical > connection overview. > > > Kindly help us to resolve the issue. what could the reason for not > recognized from VM. > > > Mit freundlichen Gr??en / Best regards / Cordialement, > > Sivakumar SARAVANAN > > Externer Dienstleister f?r / External service provider for > Valeo Siemens eAutomotive Germany GmbH > Research & Development > R & D SWENG TE 1 INFTE > Frauenauracher Stra?e 85 > 91056 Erlangen, Germany > Tel.: +49 9131 9892 0000 > Mobile: +49 176 7698 5441 > sivakumar.saravanan.jv.ext at valeo-siemens.com > valeo-siemens.com > > Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger > Schwab, > Peter Geilen; Chairman of the Supervisory Board: Hartmut Kl?tzer; > Registered office: Erlangen, Germany; Commercial registry: F?rth, HRB > 15655 > > *This e-mail message is intended for the internal use of the intended > recipient(s) only. > The information contained herein is confidential/privileged. Its > disclosure or reproduction is strictly prohibited. > If you are not the intended recipient, please inform the sender > immediately, do not disclose it internally or to third parties and > destroy it. > > In the course of our business relationship and for business purposes > only, Valeo may need to process some of your personal data. > For more information, please refer to the Valeo Data Protection > Statement and Privacy notice available on Valeo.com > * > From m at plus-plus.su Tue Mar 31 17:07:09 2020 From: m at plus-plus.su (Mikhail) Date: Tue, 31 Mar 2020 18:07:09 +0300 Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move In-Reply-To: <1585655214.ioujt1t68q.astroid@nora.none> References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> Message-ID: <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su> On 3/31/20 2:53 PM, Fabian Gr?nbichler wrote: > you should be able to manually clean the messup using the QMP/monitor > interface: > > `man qemu-qmp-ref` gives a detailed tour, you probably want > `query-block-jobs` and `query-block`, and then, depending on the output > `block-job-cancel` or `block-job-complete`. > > the HMP interface accessible via 'qm monitor ' has slightly > different commands: `info block -v`, `info block-jobs` and > `block_job_cancel`/`block_job_complete` ('_' instead of '-'). Thanks for your prompt response. I've tried the following under VM's "Monitor" section within Proxmox WEB GUI: # info block-jobs Type mirror, device drive-scsi0: Completed 6571425792 of 10725883904 bytes, speed limit 0 bytes/s and after that I tried to cancel this block job using: # block_job_cancel -f drive-scsi0 However, the block job is still there even after 3 attempts trying to cancel it: # info block-jobs Type mirror, device drive-scsi0: Completed 6571425792 of 10725883904 bytes, speed limit 0 bytes/s Same happens when I connect to it via root console using "qm monitor". I guess this is now completely stuck and the only way would be to power off/on the VM? 
> > feel free to post the output of the query/info commands before deciding > how to proceed. the complete task log of the failed 'move disk' > operation would also be interesting, if it is still available. I just asked my colleague who was cancelling this Disk move operation. He said he had to cancel it because it was stuck at 61.27%. The Disk move task log is below, I truncated repeating lines: deprecated setting 'migration_unsecure' and new 'migration: type' set at same time! Ignore 'migration_unsecure' create full clone of drive scsi0 (nvme-local-vm:123/vm-123-disk-0.qcow2) drive mirror is starting for drive-scsi0 drive-scsi0: transferred: 24117248 bytes remaining: 10713300992 bytes total: 10737418240 bytes progression: 0.22 % busy: 1 ready: 0 drive-scsi0: transferred: 2452619264 bytes remaining: 6635388928 bytes total: 9088008192 bytes progression: 26.99 % busy: 1 ready: 0 drive-scsi0: transferred: 3203399680 bytes remaining: 6643777536 bytes total: 9847177216 bytes progression: 32.53 % busy: 1 ready: 0 drive-scsi0: transferred: 4001366016 bytes remaining: 6632243200 bytes total: 10633609216 bytes progression: 37.63 % busy: 1 ready: 0 drive-scsi0: transferred: 4881121280 bytes remaining: 5856296960 bytes total: 10737418240 bytes progression: 45.46 % busy: 1 ready: 0 drive-scsi0: transferred: 6554648576 bytes remaining: 4171235328 bytes total: 10725883904 bytes progression: 61.11 % busy: 1 ready: 0 drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0 [ same line repeats like 250+ times ] drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0 drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0 drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0 drive-scsi0: Cancelling block job regards, Mikhail. From sivakumar.saravanan.jv.ext at valeo-siemens.com Tue Mar 31 18:21:44 2020 From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN) Date: Tue, 31 Mar 2020 18:21:44 +0200 Subject: [PVE-User] Fwd: Hardware based License are not recognized from VM In-Reply-To: <32522003e03745c8ef861b71e20dc8316344fd91.camel@junkyard.4t2.com> References: <32522003e03745c8ef861b71e20dc8316344fd91.camel@junkyard.4t2.com> Message-ID: Hello Tom, Kindly find the below connectivity overview. The current setup is as below >From Windows 10 laptop => USB Hub connected with laptop = all required hardware are connected with USB Hub (License are activated directly on hardware ( Not Dongle)) and now, we are trying to implement the same setup on Virtual Machine using Proxmox VE as blow. But licenses are not recognized from VM. As per the hardware vendor, we have to connect the hardware directly with server and not through network connected device. 
Proxmox VE => USB HuB => from USB hub there are few physical Hardware devices are connected Mit freundlichen Gr??en / Best regards / Cordialement, Sivakumar SARAVANAN Externer Dienstleister f?r / External service provider for Valeo Siemens eAutomotive Germany GmbH Research & Development R & D SWENG TE 1 INFTE Frauenauracher Stra?e 85 91056 Erlangen, Germany Tel.: +49 9131 9892 0000 Mobile: +49 176 7698 5441 sivakumar.saravanan.jv.ext at valeo-siemens.com valeo-siemens.com Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger Schwab, Michael Axmann; Chairman of the Supervisory Board: Hartmut Kl?tzer; Registered office: Erlangen, Germany; Commercial registry: F?rth, HRB 15655 On Tue, Mar 31, 2020 at 2:20 PM Tom Weber wrote: > If you're talking about the typical USB Dongle in Hardware as License > Key, I'd always try to use an USB Network Device Server and avoid > putting the Dongles in the Server. > > This also gives you the ability to move the VMs around. > > Regards, > Tom > > Am Dienstag, den 31.03.2020, 12:35 +0200 schrieb Sivakumar SARAVANAN: > > Hello, > > > > We have deployed the around 20 Proxmox VE in our environment to test > > the > > all software compatibility. We are planning to implement the same in > > our > > production environment once software compatibility test are passed. > > please > > note that we will deploy maximum 6 VM's on each Proxmox VE host. > > Because we > > required dedicated PCI/ USB connection to each VM. So we have > > configured 6 > > PCI cards on each Hypervisor server. Because there are few licenses > > are > > activated on hardware device level, which is connected via USB. The > > problem > > is hardware based licenses are not recognized by software tool from > > VM. But > > the same USB connection with Laptop works fine. But not from Proxmox > > VM. > > The software tool is installed the VM and tool has to recognized > > license, > > which is activated on hardware device and device is connected with VM > > via > > PCI. > > > > > > Please refer to the attached file, which tells about the physical > > connection overview. > > > > > > Kindly help us to resolve the issue. what could the reason for not > > recognized from VM. > > > > > > Mit freundlichen Gr??en / Best regards / Cordialement, > > > > Sivakumar SARAVANAN > > > > Externer Dienstleister f?r / External service provider for > > Valeo Siemens eAutomotive Germany GmbH > > Research & Development > > R & D SWENG TE 1 INFTE > > Frauenauracher Stra?e 85 > > 91056 Erlangen, Germany > > Tel.: +49 9131 9892 0000 > > Mobile: +49 176 7698 5441 > > sivakumar.saravanan.jv.ext at valeo-siemens.com > > valeo-siemens.com > > > > Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger > > Schwab, > > Peter Geilen; Chairman of the Supervisory Board: Hartmut Kl?tzer; > > Registered office: Erlangen, Germany; Commercial registry: F?rth, HRB > > 15655 > > > > *This e-mail message is intended for the internal use of the intended > > recipient(s) only. > > The information contained herein is confidential/privileged. Its > > disclosure or reproduction is strictly prohibited. > > If you are not the intended recipient, please inform the sender > > immediately, do not disclose it internally or to third parties and > > destroy it. > > > > In the course of our business relationship and for business purposes > > only, Valeo may need to process some of your personal data. 
> > For more information, please refer to the Valeo Data Protection > > Statement and Privacy notice available on Valeo.com > > * > > > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- *This e-mail message is intended for the internal use of the intended recipient(s) only. The information contained herein is confidential/privileged. Its disclosure or reproduction is strictly prohibited. If you are not the intended recipient, please inform the sender immediately, do not disclose it internally or to third parties and destroy it. In the course of our business relationship and for business purposes only, Valeo may need to process some of your personal data. For more information, please refer to the Valeo Data Protection Statement and Privacy notice available on Valeo.com * From gianni.milo22 at gmail.com Tue Mar 31 20:30:38 2020 From: gianni.milo22 at gmail.com (Gianni Milo) Date: Tue, 31 Mar 2020 19:30:38 +0100 Subject: [PVE-User] Fwd: Hardware based License are not recognized from VM In-Reply-To: References: <32522003e03745c8ef861b71e20dc8316344fd91.camel@junkyard.4t2.com> Message-ID: Everything related to pci and usb passthrough can be found in the wiki. Have a look and see if you find anything useful in there. Good luck... https://pve.proxmox.com/wiki/Pci_passthrough https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines On Tue, 31 Mar 2020 at 17:22, Sivakumar SARAVANAN < sivakumar.saravanan.jv.ext at valeo-siemens.com> wrote: > Hello Tom, > > Kindly find the below connectivity overview. > > The current setup is as below > > From Windows 10 laptop => USB Hub connected with laptop = all required > hardware are connected with USB Hub (License are activated directly on > hardware ( Not Dongle)) > > and now, we are trying to implement the same setup on Virtual Machine using > Proxmox VE as blow. But licenses are not recognized from VM. As per the > hardware vendor, we have to connect the hardware directly with server and > not through network connected device. > > Proxmox VE => USB HuB => from USB hub there are few physical Hardware > devices are connected > > > > > Mit freundlichen Gr??en / Best regards / Cordialement, > > Sivakumar SARAVANAN > > Externer Dienstleister f?r / External service provider for > Valeo Siemens eAutomotive Germany GmbH > Research & Development > R & D SWENG TE 1 INFTE > Frauenauracher Stra?e 85 > 91056 Erlangen, Germany > Tel.: +49 9131 9892 0000 > Mobile: +49 176 7698 5441 > sivakumar.saravanan.jv.ext at valeo-siemens.com > valeo-siemens.com > > Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger > Schwab, Michael > Axmann; Chairman of the Supervisory Board: Hartmut Kl?tzer; Registered > office: Erlangen, Germany; Commercial registry: F?rth, HRB 15655 > > > On Tue, Mar 31, 2020 at 2:20 PM Tom Weber wrote: > > > If you're talking about the typical USB Dongle in Hardware as License > > Key, I'd always try to use an USB Network Device Server and avoid > > putting the Dongles in the Server. > > > > This also gives you the ability to move the VMs around. > > > > Regards, > > Tom > > > > Am Dienstag, den 31.03.2020, 12:35 +0200 schrieb Sivakumar SARAVANAN: > > > Hello, > > > > > > We have deployed the around 20 Proxmox VE in our environment to test > > > the > > > all software compatibility. We are planning to implement the same in > > > our > > > production environment once software compatibility test are passed. 
> > > please > > > note that we will deploy maximum 6 VM's on each Proxmox VE host. > > > Because we > > > required dedicated PCI/ USB connection to each VM. So we have > > > configured 6 > > > PCI cards on each Hypervisor server. Because there are few licenses > > > are > > > activated on hardware device level, which is connected via USB. The > > > problem > > > is hardware based licenses are not recognized by software tool from > > > VM. But > > > the same USB connection with Laptop works fine. But not from Proxmox > > > VM. > > > The software tool is installed the VM and tool has to recognized > > > license, > > > which is activated on hardware device and device is connected with VM > > > via > > > PCI. > > > > > > > > > Please refer to the attached file, which tells about the physical > > > connection overview. > > > > > > > > > Kindly help us to resolve the issue. what could the reason for not > > > recognized from VM. > > > > > > > > > Mit freundlichen Gr??en / Best regards / Cordialement, > > > > > > Sivakumar SARAVANAN > > > > > > Externer Dienstleister f?r / External service provider for > > > Valeo Siemens eAutomotive Germany GmbH > > > Research & Development > > > R & D SWENG TE 1 INFTE > > > Frauenauracher Stra?e 85 > > > 91056 Erlangen, Germany > > > Tel.: +49 9131 9892 0000 > > > Mobile: +49 176 7698 5441 > > > sivakumar.saravanan.jv.ext at valeo-siemens.com > > > valeo-siemens.com > > > > > > Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger > > > Schwab, > > > Peter Geilen; Chairman of the Supervisory Board: Hartmut Kl?tzer; > > > Registered office: Erlangen, Germany; Commercial registry: F?rth, HRB > > > 15655 > > > > > > *This e-mail message is intended for the internal use of the intended > > > recipient(s) only. > > > The information contained herein is confidential/privileged. Its > > > disclosure or reproduction is strictly prohibited. > > > If you are not the intended recipient, please inform the sender > > > immediately, do not disclose it internally or to third parties and > > > destroy it. > > > > > > In the course of our business relationship and for business purposes > > > only, Valeo may need to process some of your personal data. > > > For more information, please refer to the Valeo Data Protection > > > Statement and Privacy notice available on Valeo.com > > > * > > > > > > > _______________________________________________ > > pve-user mailing list > > pve-user at pve.proxmox.com > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > > > > -- > *This e-mail message is intended for the internal use of the intended > recipient(s) only. > The information contained herein is > confidential/privileged. Its disclosure or reproduction is strictly > prohibited. > If you are not the intended recipient, please inform the sender > immediately, do not disclose it internally or to third parties and destroy > it. > > In the course of our business relationship and for business purposes > only, Valeo may need to process some of your personal data. > For more > information, please refer to the Valeo Data Protection Statement and > Privacy notice available on Valeo.com > * > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user >
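
As a concrete counterpart to the wiki links above, a rough sketch of the passthrough variants they describe, applied to the USB-hub/license-hardware setup discussed in this thread. The VMID 100, the vendor:product ID and the PCI address below are placeholders rather than values taken from the thread, and PCI passthrough additionally requires the IOMMU/vfio preparation covered on the Pci_passthrough page:

    lsusb                               # identify the license hardware behind the hub
    qm set 100 -usb0 host=1234:5678     # pass one USB device through by vendor:product ID (placeholder ID)
    qm set 100 -usb1 host=1-3           # or pass a physical port (bus-port), so whatever is plugged there reaches the VM
    qm set 100 -hostpci0 00:13.2        # or pass the whole USB controller as a PCI device (placeholder address)

Passing the whole controller is the closest match to the vendor's "connect the hardware directly to the server" requirement, since the guest then drives the controller natively, but it pins the VM to that host and the controller can no longer be shared with other guests.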