[PVE-User] Proxmox/Ceph problem with Broadcom SAS3008?

Mathieu GAUTHIER-LAFAYE mathieu.gauthier-lafaye at lapth.cnrs.fr
Thu Apr 20 17:01:13 CEST 2017


Hi Tobias,

I tried switching from ext4 to XFS a few months ago because Ceph dropped support for ext4... However, I noticed a large increase in latency. It was clearly perceptible even during basic use of a virtual machine, so I switched back to ext4.
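For reference, on filestore-era Proxmox/Ceph the OSD filesystem is chosen when the OSD is created; a minimal sketch, assuming a pre-BlueStore pveceph with an -fstype option and using /dev/sdd purely as a placeholder device:

  # create a filestore OSD with an explicit filesystem (device path is an example)
  pveceph createosd /dev/sdd -fstype xfs

  # roughly equivalent with ceph-disk, which handles the mkfs and mount
  ceph-disk prepare --fs-type xfs /dev/sdd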

Mathieu

----- Original Message -----
From: "Tobias Kropf" <mail at tobiaskropf.de>
To: pve-user at pve.proxmox.com
Sent: Thursday, 20 April, 2017 13:02:06
Subject: [PVE-User] Proxmox/Ceph problem with Broadcom SAS3008?

Hi @ list

We have a new Supermicro server with the SAS HBA AOC-S3008L-L8e.

http://www.supermicro.com/products/accessories/addon/AOC-S3008L-L8e.cfm

Now the PVE node is in the cluster and Ceph is installed... but the OSDs on this server show heavy latency. I have removed this server's OSDs from the root section of the CRUSH map (see the sketch after the tree output below):

-13 66.48984 host pve07
  44  7.26999     osd.44          up  1.00000          1.00000
  52  7.26999     osd.52          up  1.00000          1.00000
  50  7.26999     osd.50          up  1.00000          1.00000
  48  7.26999     osd.48          up  1.00000          1.00000
  40  3.62999     osd.40          up  1.00000          1.00000
  42  7.26999     osd.42          up  1.00000          1.00000
  36  3.62999     osd.36          up  1.00000          1.00000
  34  3.62999     osd.34          up  1.00000          1.00000
  46  7.26999     osd.46          up  1.00000          1.00000
  54  7.26999     osd.54          up  1.00000          1.00000
  28  0.35999     osd.28          up  1.00000          1.00000
  30  0.35999     osd.30          up  1.00000          1.00000
  32  0.35999     osd.32          up  1.00000          1.00000
  38  3.62999     osd.38          up  1.00000          1.00000
  -7 66.48984 host pve06
  33  3.62999     osd.33          up  1.00000          1.00000
  35  3.62999     osd.35          up  1.00000          1.00000
  37  3.62999     osd.37          up  1.00000          1.00000
  39  3.62999     osd.39          up  1.00000          1.00000
  41  7.26999     osd.41          up  1.00000          1.00000
  43  7.26999     osd.43          up  1.00000          1.00000
  45  7.26999     osd.45          up  1.00000          1.00000
  47  7.26999     osd.47          up  1.00000          1.00000
  49  7.26999     osd.49          up  1.00000          1.00000
  51  7.26999     osd.51          up  1.00000          1.00000
  53  7.26999     osd.53          up  1.00000          1.00000
  27  0.35999     osd.27          up  1.00000          1.00000
  29  0.35999     osd.29          up  1.00000          1.00000
  31  0.35999     osd.31          up  1.00000          1.00000
  -1 32.72995 root default
  -2  6.54999     host pve01
   0  1.64000         osd.0       up  1.00000          1.00000
   3  1.64000         osd.3       up  1.00000          1.00000
  17  1.09000         osd.17      up  1.00000          1.00000
  18  1.09000         osd.18      up  1.00000          1.00000
  19  1.09000         osd.19      up  1.00000          1.00000
  -3  6.54999     host pve02
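For reference, a minimal sketch of how a whole host bucket can be moved out of the default root (and back) so its OSDs stop taking data; pve07 comes from the tree above, and the "maintenance" root name is purely an example:

  # create a separate root bucket and move the host bucket there (bucket name is an example)
  ceph osd crush add-bucket maintenance root
  ceph osd crush move pve07 root=maintenance

  # move the host back under the default root afterwards
  ceph osd crush move pve07 root=default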

The OSDs have nothing to do, but they all show high latency (see the sketch after the table below):


  27                     0                    7
  28                     0                    8
  29                     1                   10
  30                     0                    6
  31                     0                    6
  32                     0                    8
  33                     6                  105
  34                    22                   43
  35                     0                    6
  36                    21                   48
  37                     0                   77
  38                    22                   76
  39                     1                  113
  40                    20                   37
  41                    10                  135
  42                    70                  120
  43                     3                  117
  44                    77                  131
  45                     1                  160
  46                    67                  113
  47                     1                  111
  48                    64                  109
  49                     2                  154
  50                    65                  111
  51                     1                  114
  52                    72                  107
  53                     0                  195
  54                    70                  117
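To see whether the latency comes from Ceph or from the disks/HBA underneath, one way is to benchmark a single OSD while watching device-level service times; a sketch, assuming the two columns above are commit/apply latency in ms (as in ceph osd perf output) and using osd.44 only as an example:

  # 1 GiB write benchmark against a single OSD daemon
  ceph tell osd.44 bench

  # per-device service times and utilisation on the OSD host while the bench runs
  iostat -x 1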

Does anyone have an idea?

The OSDs are 8 TB disks with an XFS filesystem...
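Since the subject points at the SAS3008 HBA, checking the mpt3sas driver messages and the controller firmware would be an obvious next step; a sketch, assuming Broadcom's sas3flash utility is installed (it is not part of a default Debian/Proxmox install):

  # confirm the controller and which kernel driver is bound to it
  lspci -nnk | grep -iA3 sas3008
  # kernel messages from the mpt3sas driver (resets, queue depth, errors)
  dmesg | grep -i mpt3sas
  # firmware and BIOS versions reported by the Broadcom flash utility
  sas3flash -listall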

Thanks

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


