[pve-devel] pve-2.3: openvz performs horribly

Michael Rasmussen mir at datanom.net
Mon Apr 8 00:22:29 CEST 2013


Hi all,

After upgrading to 2.3 I have experienced very poor performance on
OpenVZ using shared storage on NFS. I have an owncloud server which,
before upgrading, only needed milliseconds to load any page; now
loading any page takes more than 30 seconds! I have tried every
optimization setting in the book, so I have come to a grinding halt :-)
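
For reference, the slowdown is easy to reproduce with a simple timed
page fetch against the container (the address below is only a
placeholder for the owncloud instance):

# time curl -s -o /dev/null http://<owncloud-ct-address>/index.php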

To eliminate storage problems I created a clone of this server, but
installed it as a KVM guest on the same node instead. Every setting,
configuration, resource, and storage is exactly identical, so the only
difference is that the clone is running in KVM while the original is
running in OpenVZ.

The cloned server performs exactly like the OpenVZ server did before
upgrading to 2.3 so something has to be broken in the OpenVZ
implementation in kernel 2.6.32-19-pve.
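
If it helps to make the comparison more concrete, I can run the same
fsync-heavy write test inside both guests (the dd parameters are only an
example; 112 is the OpenVZ owncloud container, 128 is the KVM clone):

# vzctl enter 112
# dd if=/dev/zero of=/tmp/ddtest bs=4k count=10000 conv=fsync
(and the same dd over ssh inside VMID 128 for comparison)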

Some background info of the node:
# free
             total       used       free     shared    buffers    cached
Mem:       8136076    2276328    5859748          0      18460    106792
-/+ buffers/cache:    2151076    5985000
Swap:      7811064      61776    7749288

# pvectl list
      VMID NAME                 STATUS     MEM(MB)    DISK(GB)    
       101 repo                 running    512        1.00        
       105 www                  stopped    1024       15.00       
       106 wheezy               stopped    512        4.00        
       112 owncloud             running    1024       30.00       
# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       107 nas-test             stopped    256                4.00 0
       115 balder               running    512               20.00 27819
       121 pve-storage2         stopped    1024               4.00 0
       124 debian7-rc1          stopped    1024              16.00 0
       127 mint-debian          stopped    1024              20.00 0
       128 owncloud1            running    1024              30.00 112832

# pveperf 
CPU BOGOMIPS:      23998.08
REGEX/SECOND:      1112048
HD SIZE:           3.66 GB (/dev/mapper/pve-root)
BUFFERED READS:    69.55 MB/sec
AVERAGE SEEK TIME: 8.05 ms
FSYNCS/SECOND:     668.49
DNS EXT:           60.14 ms
DNS INT:           1.25 ms (datanom.net)
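
The numbers above are for the local root filesystem; if useful I can
also run pveperf against the NFS-backed storage directory (the mount
point below is a guess, the real storage id may differ):

# pveperf /mnt/pve/<nfs-storage>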

# cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 16
model		: 5
model name	: AMD Athlon(tm) II X4 640 Processor
stepping	: 3
cpu MHz		: 2999.760
cache size	: 512 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 5
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good
nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm
extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit
wdt nodeid_msr npt lbrv svm_lock nrip_save
bogomips	: 5999.52
TLB size	: 1024 4K pages
clflush size	: 64
cache_alignment	: 64
address sizes	: 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate

......


Anything in particular I should dig into?

If any of you want to have a closer look, I can provide full access
to both owncloud and a shell via ssh on both 'VMs'.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael <at> rasmussen <dot> cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir <at> datanom <dot> net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir <at> miras <dot> org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--------------------------------------------------------------
Experience is what you get when you were expecting something else.

