[PVE-User] Very slow server (RAW IDE discs and KVM)

Erik van Ast Erik.van.Ast at suzohapp.nl
Fri Aug 12 10:35:43 CEST 2011


Hi all,

We use the local discs to store the VMs on, and we haven't used the backup option for the past two days. I restarted the Proxmox system twice after removing the backup jobs, and back when we were still using backups, we put them on a remote NFS share...
I ran "ps -afx | grep vzdump" as Muhammad asked, and this is the result (at the moment the slow machine is copying files over):

proxmox:/var/lib/vz/images/102#
proxmox:/var/lib/vz/images/102# ps -afx | grep vzdump
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
15397 pts/0    S+     0:00  |       \_ grep vzdump
proxmox:/var/lib/vz/images/102# ps -afx
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
  PID TTY      STAT   TIME COMMAND
    2 ?        S      0:00 [kthreadd]
    3 ?        S      0:00  \_ [migration/0]
    4 ?        S      0:00  \_ [ksoftirqd/0]
    5 ?        S      0:00  \_ [watchdog/0]
    6 ?        S      0:00  \_ [migration/1]
    7 ?        S      0:01  \_ [ksoftirqd/1]
    8 ?        S      0:00  \_ [watchdog/1]
    9 ?        S      0:00  \_ [migration/2]
   10 ?        S      0:00  \_ [ksoftirqd/2]
   11 ?        S      0:00  \_ [watchdog/2]
   12 ?        S      0:00  \_ [migration/3]
   13 ?        S      0:00  \_ [ksoftirqd/3]
   14 ?        S      0:00  \_ [watchdog/3]
   15 ?        S      0:00  \_ [events/0]
   16 ?        S      0:00  \_ [events/1]
   17 ?        S      0:00  \_ [events/2]
   18 ?        S      0:00  \_ [events/3]
   19 ?        S      0:00  \_ [khelper]
   20 ?        S      0:00  \_ [netns]
   21 ?        S      0:00  \_ [async/mgr]
   22 ?        S      0:00  \_ [pm]
   23 ?        S      0:00  \_ [sync_supers]
   24 ?        S      0:00  \_ [bdi-default]
   25 ?        S      0:00  \_ [kintegrityd/0]
   26 ?        S      0:00  \_ [kintegrityd/1]
   27 ?        S      0:00  \_ [kintegrityd/2]
   28 ?        S      0:00  \_ [kintegrityd/3]
   29 ?        S      0:00  \_ [kblockd/0]
   30 ?        S      0:00  \_ [kblockd/1]
   31 ?        S      0:00  \_ [kblockd/2]
   32 ?        S      0:00  \_ [kblockd/3]
   33 ?        S      0:00  \_ [kacpid]
   34 ?        S      0:00  \_ [kacpi_notify]
   35 ?        S      0:00  \_ [kacpi_hotplug]
   36 ?        S      0:00  \_ [kseriod]
   41 ?        S      0:00  \_ [kondemand/0]
   42 ?        S      0:00  \_ [kondemand/1]
   43 ?        S      0:00  \_ [kondemand/2]
   44 ?        S      0:00  \_ [kondemand/3]
   45 ?        S      0:00  \_ [ubstatd]
   46 ?        S      0:00  \_ [khungtaskd]
   47 ?        S      0:00  \_ [kswapd0]
   48 ?        S      0:00  \_ [aio/0]
   49 ?        S      0:00  \_ [aio/1]
   50 ?        S      0:00  \_ [aio/2]
   51 ?        S      0:00  \_ [aio/3]
   52 ?        S      0:00  \_ [crypto/0]
   53 ?        S      0:00  \_ [crypto/1]
   54 ?        S      0:00  \_ [crypto/2]
   55 ?        S      0:00  \_ [crypto/3]
   58 ?        S      0:00  \_ [kstriped]
  385 ?        S      0:00  \_ [ata/0]
  393 ?        S      0:00  \_ [ksuspend_usbd]
  401 ?        S      0:00  \_ [mpt_poll_0]
  403 ?        S      0:00  \_ [ata/1]
  420 ?        S      0:00  \_ [khubd]
  421 ?        S      0:00  \_ [mpt/0]
  426 ?        S      0:00  \_ [ata/2]
  442 ?        S      0:00  \_ [ata/3]
  460 ?        S      0:00  \_ [ata_aux]
  483 ?        S      0:00  \_ [scsi_eh_0]
  486 ?        S      0:00  \_ [scsi_eh_1]
  503 ?        S      0:00  \_ [scsi_eh_2]
  525 ?        S      0:00  \_ [scsi_eh_3]
  603 ?        S      0:00  \_ [scsi_eh_4]
  604 ?        S      0:00  \_ [scsi_eh_5]
  648 ?        S      0:00  \_ [scsi_eh_6]
  742 ?        S      0:00  \_ [ksnapd]
  759 ?        S      0:00  \_ [kdmflush]
  765 ?        S      0:00  \_ [kdmflush]
  771 ?        S      0:00  \_ [kdmflush]
  802 ?        S      0:01  \_ [kjournald]
 1189 ?        S      0:00  \_ [edac-poller]
 1382 ?        S      0:00  \_ [kpsmoused]
 1700 ?        S      0:00  \_ [flush-254:1]
 1752 ?        S      0:01  \_ [kjournald]
 1753 ?        S      0:00  \_ [kjournald]
 1879 ?        S<     0:00  \_ [kslowd000]
 1880 ?        S<     0:00  \_ [kslowd001]
 1929 ?        S      0:00  \_ [iscsi_eh]
 1942 ?        S      0:00  \_ [ib_addr]
 1947 ?        S      0:00  \_ [ib_mcast]
 1949 ?        S      0:00  \_ [iw_cm_wq]
 1951 ?        S      0:00  \_ [ib_cm/0]
 1952 ?        S      0:00  \_ [ib_cm/1]
 1953 ?        S      0:00  \_ [ib_cm/2]
 1954 ?        S      0:00  \_ [ib_cm/3]
 1957 ?        S      0:00  \_ [rdma_cm]
 2289 ?        S      0:00  \_ [vzmond]
 2368 ?        S      0:00  \_ [kvm-irqfd-clean]
 2545 ?        S      0:00  \_ [flush-254:2]
 2705 ?        S      0:00  \_ [rpciod/0]
 2706 ?        S      0:00  \_ [rpciod/1]
 2707 ?        S      0:00  \_ [rpciod/2]
 2708 ?        S      0:00  \_ [rpciod/3]
 2716 ?        S      0:00  \_ [nfsiod]
 2723 ?        S      0:00  \_ [lockd]
    1 ?        Ss     0:00 init [2]
  882 ?        S<s    0:00 udevd --daemon
 1903 ?        Ss     0:00 /sbin/portmap
 1914 ?        Ss     0:00 /sbin/rpc.statd
 1962 ?        Ss     0:00 /usr/sbin/iscsid
 1963 ?        S<Ls   0:01 /usr/sbin/iscsid
 2079 ?        Sl     0:01 /usr/sbin/rsyslogd -c3
 2093 ?        Ss     0:00 /usr/sbin/sshd
 2516 ?        Ss     0:00  \_ sshd: root at pts/0
 2518 pts/0    Ss     0:00  |   \_ -bash
15401 pts/0    R+     0:00  |       \_ ps -afx
 2922 ?        Ss     0:00  \_ sshd: root at notty
 2937 ?        Ss     0:00      \_ bash -c (cd /tmp && perl veeam_soap78b6422c-ac60-496a-9180-a9b72d403392.pl -d -c -l lib78b6422c-ac60-496a-9180-a9b72d40339
 2938 ?        S      0:00          \_ bash -c (cd /tmp && perl veeam_soap78b6422c-ac60-496a-9180-a9b72d403392.pl -d -c -l lib78b6422c-ac60-496a-9180-a9b72d4
 2939 ?        S      0:01              \_ perl veeam_soap78b6422c-ac60-496a-9180-a9b72d403392.pl -d -c -l lib78b6422c-ac60-496a-9180-a9b72d403392 -e /tmp/ve
 2173 ?        Ss     0:01 /usr/lib/postfix/master
 2190 ?        Ss     0:00 /usr/sbin/vzeventd
 2197 ?        Ss     0:00 pvedaemon worker
15167 ?        S      0:03  \_ pvedaemon worker
15180 ?        S      0:02  \_ pvedaemon worker
 2355 ?        Ss     0:01 /usr/sbin/ntpd -p /var/run/ntpd.pid -u 102:106 -g
 2387 ?        Ss     0:00 /usr/sbin/atd
 2407 ?        Ss     0:00 /usr/sbin/cron
 2430 ?        Ss     0:03 /usr/sbin/apache2 -k start
15312 ?        S      0:00  \_ /usr/sbin/apache2 -k start
 2457 tty1     Ss+    0:00 /sbin/getty 38400 tty1
 2458 tty2     Ss+    0:00 /sbin/getty 38400 tty2
 2459 tty3     Ss+    0:00 /sbin/getty 38400 tty3
 2460 tty4     Ss+    0:00 /sbin/getty 38400 tty4
 2461 tty5     Ss+    0:00 /sbin/getty 38400 tty5
 2462 tty6     Ss+    0:00 /sbin/getty 38400 tty6
 2533 ?        Sl    79:26 /usr/bin/kvm -monitor unix:/var/run/qemu-server/106.mon,server,nowait -vnc unix:/var/run/qemu-server/106.vnc,password -pidfile /va
 2743 ?        Sl    49:44 /usr/bin/kvm -monitor unix:/var/run/qemu-server/102.mon,server,nowait -vnc unix:/var/run/qemu-server/102.vnc,password -pidfile /va
 2751 ?        Ss     0:00 pvedaemon worker
 2752 ?        S      0:00  \_ sh -c /usr/sbin/qm vncproxy 102 sOL0JthU8bwNjoKLcmoo3tUcs0Y 2>/dev/null
 2753 ?        S      0:06      \_ /usr/bin/perl -w /usr/sbin/qm vncproxy 102 sOL0JthU8bwNjoKLcmoo3tUcs0Y
13615 ?        S<L    0:00 /usr/bin/atop -a -w /var/log/atop.log 600
proxmox:/var/lib/vz/images/102#
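
(By the way, the "bad ps syntax" warning above is harmless: procps just complains about the leading dash being combined with the BSD-style options. If I'm not mistaken, the same tree view comes out cleanly with:

  ps fax | grep [v]zdump    # BSD syntax, no dash; [v] keeps the grep itself out of the results

So as far as I can tell there is no vzdump process running at all.)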



Kind regards / Mit freundlichem Gruß,
Erik 

-----------------------------

Message: 2
Date: Thu, 11 Aug 2011 14:45:30 +0500
From: Muhammad Yousuf Khan <sirtcp at gmail.com>
To: Erik van Ast <Erik.van.Ast at suzohapp.nl>
Cc: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] Very slow server (RAW IDE discs and KVM)
Message-ID:
	<CAGWVfM=T0EKKBHYrJhgP0T9ZbhsxjVMDZknGBNZYOxRBL97XfQ at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Perhaps I've caught it here: you might actually be backing up 180 GB of data, since you said your VM is 180 GB in size. Backing up that amount of data will kill your I/O and choke the bandwidth. VMs with big virtual hard disks are not recommended, since the backup is most likely the problem, especially as you are running the whole system on a desktop machine. I am also running one machine with the same config as yours in production; it hosts 7 VMs of different OS flavours, Linux and Windows, but all of them are smaller than 25 GB. I have been running this whole bunch for more than 3 months and never had a performance issue, by limiting the bandwidth and keeping the virtual hard disks small.

You can limit the bandwidth by following the instructions below:

Create the /etc/vzdump.conf file with the following content:

bwlimit: 5000
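
If I understand it correctly, the bwlimit value is in KB/s, so 5000 caps a backup at roughly 5 MB/s. It can also be given for a single run instead of globally, along these lines (the dump directory below is only an example path):

  # one-off backup of VM 102, limited to ~5 MB/s, written to an example directory
  vzdump 102 --bwlimit 5000 --dumpdir /mnt/pve/backup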

Now for the fileserver part. You could use a separate hardware machine for the fileserver; if you want to run it as a VM anyway, then try one of the NAS solutions. There are a few very good, popular and free NAS solutions out there, like FreeNAS and Openfiler. I am using Openfiler. You can always attach NFS or iSCSI to your VMs as a native facility of those NAS boxes, although I would not recommend iSCSI if you are using a desktop machine as the NAS box, because you will not reach the throughput of your actual LAN. Anyway, that is off topic. I hope this helps.
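
If it helps, a quick way to sanity-check such a NAS export from the Proxmox host is a plain NFS mount plus a rough write test (the server address and export path below are only placeholders):

  # mount the NAS export temporarily and measure sequential write speed
  mkdir -p /mnt/nastest
  mount -t nfs 192.168.1.50:/export/backup /mnt/nastest
  dd if=/dev/zero of=/mnt/nastest/testfile bs=1M count=1024
  umount /mnt/nastest

That gives at least a ballpark figure for what the box can deliver over your LAN.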



Thank you,


------------------------------

Message: 2
Date: Thu, 11 Aug 2011 17:18:15 +0200
From: Frédéric Massot <frederic at juliana-multimedia.com>
To: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] Very slow server (RAW IDE discs and KVM)
Message-ID: <4E43F2B7.8060803 at juliana-multimedia.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 11/08/2011 14:08, Erik van Ast wrote:
> Hi all,
>
> I removed the machines that were creating problems and I also deleted 
> the backup job. The system is still very slow when creating a new VM, 
> but I still can't figure out why.
>
[...]
>
> When creating a new 2003 machine (102) which is "installing devices"
> at the moment of this screen capture. Installing devices is normally
> not very intensive to do, but the disc has been between 90-99% all the
> time when installing this new machine:
>
> proxmox:~# atop
> ATOP - proxmox            2011/08/11  13:52:01          10 seconds elapsed
> PRC | sys   0.35s | user   0.83s | #proc    133 | #zombie     0 | #exit      0 |
> CPU | sys      2% | user      4% | irq       0% | idle    274% | wait    121% |


In my opinion, the wait value indicates that your storage system is too slow. What is your storage system (SAS disks, SATA, RAID, NAS...)?
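
A quick way to confirm it, assuming the sysstat package is installed, is to watch the disks while the new VM is being created:

  # extended per-device I/O statistics, refreshed every 2 seconds
  iostat -x 2
  # or, without sysstat, watch the "wa" (I/O wait) column
  vmstat 2

If the disk sits near 100% utilisation while moving only a few MB/s, the bottleneck is the storage itself.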



Regards.
-- 
==============================================
|              FRÉDÉRIC MASSOT               |
|     http://www.juliana-multimedia.com      |
|   mailto:frederic at juliana-multimedia.com   |
===========================Debian=GNU/Linux===


------------------------------

Message: 3
Date: Thu, 11 Aug 2011 21:07:32 +0500
From: Muhammad Yousuf Khan <sirtcp at gmail.com>
To: Erik van Ast <Erik.van.Ast at suzohapp.nl>
Cc: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] Very slow server (RAW IDE discs and KVM)
Message-ID:
	<CAGWVfMk0a=JDdDUCoacoMN3MjpmoJXeCWqJ1fcg933XatRAOog at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

I agree with Frédéric Massot, this could be a storage issue as well. If you are
using software RAID, there is a good chance it is increasing your I/O load.

Find the process with the command "ps -afx | grep vzdump" and kill it with
"kill XXXX" (where XXXX is the process ID).

If I were you, I would restart the Proxmox machine :) to stop any backup jobs
that are still running.

Use a Windows share or NFS share as backup storage; don't store backup files
on the local hard disk (in my opinion).
____________________________________________


