[pve-devel] memory consumption

VELARTIS Philipp Dürhammer p.duerhammer at velartis.at
Tue Aug 18 16:34:53 CEST 2015


Hi,

after some more debugging and reading:
if cache=none is set on the VM disk, it should override rbd cache = true in the Ceph config and force it to false
but this machine is still consuming 20G instead of 16G
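For context, this is roughly how I understand QEMU (since 1.2) translates its disk cache mode into librbd cache settings; note that the Ceph docs also say explicit rbd cache settings in ceph.conf can override what QEMU requests, which may explain why cache=none does not seem to take effect here. A sketch of the mapping, worth double-checking against the exact QEMU/Ceph versions in use:

```python
# Sketch of the QEMU cache-mode -> librbd cache-setting mapping
# (my understanding; explicit ceph.conf settings may still win).
def rbd_cache_settings(qemu_cache_mode):
    mapping = {
        # bypass the librbd cache entirely
        "none":         {"rbd_cache": False},
        # cache enabled, dirty data allowed
        "writeback":    {"rbd_cache": True},
        # cache enabled, but never hold dirty data
        "writethrough": {"rbd_cache": True, "rbd_cache_max_dirty": 0},
    }
    return mapping[qemu_cache_mode]

print(rbd_cache_settings("none"))  # cache=none should request rbd_cache off
```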


From: pve-devel [mailto:pve-devel-bounces at pve.proxmox.com] On behalf of VELARTIS Philipp Dürhammer
Sent: Tuesday, 18 August 2015 16:01
To: 'Eneko Lacunza'; pve-devel at pve.proxmox.com
Subject: Re: [pve-devel] memory consumption

Hi Eneko!

I use Proxmox Ceph Server.
I found out that my configured rbd cache size was too big, which could explain the problem.
But when I run:
ceph -n mon.2 --show-config
it shows me Ceph's default cache size, and also rbd cache = false. Very strange.
Is this not the way to get the running config?

My client section in the /etc/pve/ceph.conf looks like this:
[client]
rbd cache = true
rbd cache size = 268435456
rbd cache max dirty = 33554432
rbd cache max dirty age = 3

but it seems to be ignored?
I am using cache=writeback and cache=none. Same results.
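One thing worth noting about the config above: 268435456 bytes is 256 MiB, and as far as I understand, librbd keeps a separate cache per opened image, so the overhead multiplies with the number of virtual disks. A back-of-the-envelope sketch (assumed disk count, not measured values):

```python
# Rough worst-case estimate of rbd cache overhead on top of guest RAM.
# librbd caches are per opened image, so "rbd cache size" applies per disk.
RBD_CACHE_SIZE = 268435456  # 256 MiB, the value from the ceph.conf above

def extra_cache_bytes(num_disks, cache_size=RBD_CACHE_SIZE):
    # Worst case: every per-image cache completely filled.
    return num_disks * cache_size

gib = extra_cache_bytes(5) / 2**30  # assuming a 5-disk VM for illustration
print(f"5 disks -> up to {gib:.2f} GiB of rbd cache on top of guest RAM")
```

That alone would not account for a machine going from 16G to 20G, but combined with QEMU's own allocation overhead it narrows down where to look.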

From: pve-devel [mailto:pve-devel-bounces at pve.proxmox.com] On behalf of Eneko Lacunza
Sent: Tuesday, 18 August 2015 14:36
To: pve-devel at pve.proxmox.com
Subject: Re: [pve-devel] memory consumption

Hi Philipp,

Which Ceph packages are you running?

We have 3 small clusters using ceph/rbd and I'm not seeing this. The biggest VM has 6GB of RAM though; the biggest disk is 200GB, and one VM has a total of 425GB across 5 disks.

Are you using Proxmox Ceph Server?
What cache setting is in the VM hard disk config?

Cheers
Eneko

On 18/08/15 at 14:22, VELARTIS Philipp Dürhammer wrote:
Hi,

some of my virtual machines on different servers are consuming a lot more RAM than they should
(up to twice as much), which is a horror when machines that should have about 15-20G consume 30-40GB.
I was checking:
) all machines have the same pve packages installed
) it only happens on machines with a Ceph OSD installed
) which are also using the rbd client for virtual hard disks

It does not happen on machines using an NFS backend for virtual hard disks, or on machines that are just using the rbd client but don't have OSDs...

Has anybody experienced something like this?
Any ideas on how to debug further?

Machine config:
proxmox-ve-2.6.32: 3.3-147 (running kernel: 3.10.0-7-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-3.10.0-7-pve: 3.10.0-27
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1






--
Technical Director
Binovo IT Human Project, S.L.
Telf. 943575997
      943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

