[Gluster-users] Help analise statedumps
pedro at pmc.digital
Mon Feb 4 10:11:53 UTC 2019
The process was `glusterfs`; yes, I took the statedump of the same process (different PID, since the node was rebooted).
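For reference, this is roughly how the dumps can be generated (a sketch; the single-client-per-node assumption and paths are illustrative). Brick-side dumps can be requested with `gluster volume statedump gvol1`, while a FUSE client's glusterfs process writes a statedump when it receives SIGUSR1:

```shell
# Brick processes: glusterd triggers dumps for every brick of the volume.
#   gluster volume statedump gvol1

# Client (FUSE mount) process: send SIGUSR1; the dump appears under
# /var/run/gluster/ by default.
pid=$(pgrep -x glusterfs | head -n1)   # assumes one client mount per node
if [ -n "$pid" ]; then
  kill -USR1 "$pid"
  echo "statedump requested for pid $pid"
else
  echo "no glusterfs client process running"
fi
```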
From: Sanju Rakonde <srakonde at redhat.com>
Sent: 04 February 2019 06:10
To: Pedro Costa <pedro at pmc.digital>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Help analise statedumps
Can you please specify which process has the leak? Have you taken the statedump of the same process that has the leak?
On Sat, Feb 2, 2019 at 3:15 PM Pedro Costa <pedro at pmc.digital> wrote:
I have a 3x replicated cluster running GlusterFS 4.1.7 on Ubuntu 16.04.5; all 3 replicas are also clients hosting a Node.js/Nginx web server.
The current configuration is as such:
Volume Name: gvol1
Volume ID: XXXXXX
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
I believe there's a memory leak somewhere: memory usage just keeps growing until it hangs one or more nodes, sometimes taking the whole cluster down.
I have taken 2 statedumps on one of the nodes: one while memory usage was too high, and another just after a reboot, with the app running and the volume fully healed.
https://pmcdigital.sharepoint.com/:u:/g/EYDsNqTf1UdEuE6B0ZNVPfIBf_I-AbaqHotB1lJOnxLlTg?e=boYP09 (high memory)
https://pmcdigital.sharepoint.com/:u:/g/EWZBsnET2xBHl6OxO52RCfIBvQ0uIDQ1GKJZ1GrnviyMhg?e=wI3yaY (after reboot)
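One way to compare the two dumps is to rank the per-translator memory-accounting sections by `num_allocs` in each and diff the top entries; a section whose count only ever grows between dumps is a leak candidate. A minimal sketch, using a made-up statedump excerpt (the section names and figures below are illustrative, not from my actual dumps):

```shell
# Illustrative statedump fragment; real dumps land in /var/run/gluster/ by default.
cat > /tmp/statedump.sample <<'EOF'
[mount/fuse.fuse - usage-type gf_common_mt_char memusage]
size=4096
num_allocs=128
max_size=8192
[cluster/replicate.gvol1-replicate-0 - usage-type gf_common_mt_asprintf memusage]
size=262144
num_allocs=9001
max_size=262144
EOF

# Rank allocator sections by live allocation count, biggest first.
awk -F= '/usage-type/ {sect=$0}
         $1 == "num_allocs" {print $2, sect}' /tmp/statedump.sample \
  | sort -rn | head
```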
Any help would be greatly appreciated,
Pedro Maia Costa
Senior Developer, pmc.digital
Gluster-users mailing list
Gluster-users at gluster.org