[PVE-User] proxmox-restore - performance issues

Gregor Burck gregor at aeppelbroe.de
Fri Oct 1 11:00:37 CEST 2021


And more information:

I tested with an fio job. The settings are not my own; someone from the
forum gave them to me for testing ZFS (the same job expressed as a fio
job file is shown after the output below):


root@ph-pbs:~# fio --name=typical-vm --size=8G --rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio --iodepth=12 --group_reporting --runtime=20m --time_based^C
root@ph-pbs:~# cd /ZFSPOOL/
BACKUPSTORE001/ VMSTORE001/
root@ph-pbs:~# cd /ZFSPOOL/VMSTORE001/
root@ph-pbs:/ZFSPOOL/VMSTORE001# fio --name=typical-vm --size=8G --rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio --iodepth=12 --group_reporting --runtime=20m --time_based
typical-vm: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.25
Starting 4 processes
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
Jobs: 4 (f=0): [f(4)][100.0%][r=1518MiB/s,w=682MiB/s][r=389k,w=175k IOPS][eta 00m:00s]
typical-vm: (groupid=0, jobs=4): err= 0: pid=3804786: Fri Oct  1 10:56:30 2021
   read: IOPS=356k, BW=1392MiB/s (1460MB/s)(1631GiB/1200001msec)
     slat (nsec): min=1854, max=176589k, avg=5156.08, stdev=39010.68
     clat (usec): min=4, max=191637, avg=85.89, stdev=133.21
      lat (usec): min=32, max=191640, avg=91.13, stdev=139.42
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   97], 80.00th=[  111], 90.00th=[  141], 95.00th=[  176],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  570], 99.95th=[  693],
      | 99.99th=[ 1090]
    bw (  MiB/s): min=  250, max= 2159, per=100.00%, avg=1392.77, stdev=63.78, samples=9596
    iops        : min=64218, max=552858, avg=356548.75, stdev=16328.20, samples=9596
   write: IOPS=160k, BW=626MiB/s (656MB/s)(733GiB/1200001msec); 0 zone resets
     slat (usec): min=3, max=191425, avg= 9.71, stdev=34.41
     clat (usec): min=2, max=191641, avg=86.02, stdev=137.32
      lat (usec): min=35, max=191650, avg=95.85, stdev=144.10
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   98], 80.00th=[  111], 90.00th=[  141], 95.00th=[  178],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  578], 99.95th=[  701],
      | 99.99th=[ 1106]
    bw (  KiB/s): min=114464, max=995856, per=100.00%, avg=640817.51, stdev=29342.79, samples=9596
    iops        : min=28616, max=248964, avg=160204.26, stdev=7335.70, samples=9596
   lat (usec)   : 4=0.01%, 10=0.01%, 50=13.69%, 100=58.80%, 250=26.29%
   lat (usec)   : 500=1.08%, 750=0.10%, 1000=0.02%
   lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
   lat (msec)   : 100=0.01%, 250=0.01%
   cpu          : usr=18.17%, sys=79.17%, ctx=982498, majf=10, minf=2977
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
      issued rwts: total=427672030,192161509,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
    READ: bw=1392MiB/s (1460MB/s), 1392MiB/s-1392MiB/s (1460MB/s-1460MB/s), io=1631GiB (1752GB), run=1200001-1200001msec
   WRITE: bw=626MiB/s (656MB/s), 626MiB/s-626MiB/s (656MB/s-656MB/s), io=733GiB (787GB), run=1200001-1200001msec
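
For reference, here is the same job written as a fio job file; this is just a sketch equivalent to the command line above (the file name typical-vm.fio is my own choice), to be run with "fio typical-vm.fio" from inside the dataset:

[typical-vm]
; 8 GiB file per job, sequential mixed I/O, 69% reads
size=8G
rw=readwrite
rwmixread=69
; O_DIRECT, 4 KiB blocks, 4 worker processes with 12 in-flight I/Os each
direct=1
bs=4K
numjobs=4
ioengine=libaio
iodepth=12
; aggregate per-job statistics and keep running for the full 20 minutes
group_reporting
runtime=20m
time_based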


And the fio run above was done while two of the restore jobs were still running.
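
To see how much of the pool's bandwidth the concurrent restores were taking during the benchmark, something like this could be run alongside fio (a sketch; it assumes the pool is named ZFSPOOL, as the mount path above suggests):

# report pool throughput every 5 seconds while the restores run
root@ph-pbs:~# zpool iostat -v ZFSPOOL 5

The -v flag breaks the numbers down per vdev, and the trailing 5 repeats the report every five seconds.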




