[Bugs] [Bug 1207735] New: Disperse volume: Huge memory leak of glusterfsd process
bugzilla at redhat.com
Tue Mar 31 15:11:28 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1207735
Bug ID: 1207735
Summary: Disperse volume: Huge memory leak of glusterfsd process
Product: GlusterFS
Version: mainline
Component: disperse
Assignee: bugs at gluster.org
Reporter: byarlaga at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Created attachment 1009107
--> https://bugzilla.redhat.com/attachment.cgi?id=1009107&action=edit
statedump of node1
Description of problem:
=======================
There is a huge memory leak in the glusterfsd process with a disperse volume. A
plain disperse volume was created and then converted to distributed-disperse.
Even with no I/O from the clients, the resident memory of each glusterfsd
process grows to around 20GB (as reported by top), and the system becomes
unresponsive once the whole memory is consumed.
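To quantify the growth over time, one quick way is to read VmRSS for every
glusterfsd process from /proc. A minimal sketch (Linux-only; written here for
illustration, not part of the original report):

```python
import os

def rss_kb(pid):
    """Resident set size in kB, read from /proc/<pid>/status (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in kB
    return 0

def glusterfsd_pids():
    """PIDs whose command name is glusterfsd, found by scanning /proc."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == "glusterfsd":
                    pids.append(int(entry))
        except OSError:
            pass  # process exited between listdir() and open()
    return pids

if __name__ == "__main__":
    for pid in glusterfsd_pids():
        print(pid, rss_kb(pid), "kB")
```

Run in a loop (e.g. under `watch`), this makes the leak rate visible without
relying on top's sampling.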
Version-Release number of selected component (if applicable):
=============================================================
[root at vertigo geo-master]# gluster --version
glusterfs 3.7dev built on Mar 31 2015 01:05:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
Additional info:
================
Top output of node1:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 9902 root  20   0 4321m 1.4g 2920 D 20.0  4.5 1:28.68 glusterfsd
10758 root  20   0 4321m 1.4g 2920 D 18.4  4.5 1:26.33 glusterfsd
10053 root  20   0 4961m 1.6g 2920 D 18.1  5.2 1:28.64 glusterfsd
10729 root  20   0 3681m 1.0g 2920 D 17.1  3.3 1:26.60 glusterfsd
10759 root  20   0 4321m 1.4g 2920 S 17.1  4.5 1:25.68 glusterfsd
10756 root  20   0 3745m 1.4g 2920 S 16.4  4.6 1:30.05 glusterfsd
 9939 root  20   0 4321m 1.4g 2920 S 16.4  4.5 1:27.61 glusterfsd
10775 root  20   0 4961m 1.6g 2920 D 15.8  5.2 1:26.52 glusterfsd
10723 root  20   0 3745m 1.4g 2920 S 15.8  4.6 1:32.41 glusterfsd
10728 root  20   0 34.0g  19g 2920 S 15.8 63.3 1:31.89 glusterfsd
10054 root  20   0 3681m 1.0g 2920 D 15.8  3.3 1:28.10 glusterfsd
10090 root  20   0 3681m 1.0g 2920 S 15.8  3.3 1:33.02 glusterfsd
10789 root  20   0 3681m 1.0g 2920 D 15.8  3.3 1:26.16 glusterfsd
10739 root  20   0 4961m 1.6g 2920 D 15.4  5.2 1:31.29 glusterfsd
10763 root  20   0 4961m 1.6g 2920 S 15.4  5.2 1:27.03 glusterfsd
10727 root  20   0 34.0g  19g 2920 S 15.4 63.3 1:31.35 glusterfsd
10782 root  20   0 34.0g  19g 2920 S 15.4 63.3 1:31.86 glusterfsd
10062 root  20   0 3425m 1.1g 2920 S 15.4  3.5 1:44.85 glusterfsd
10783 root  20   0 3681m 1.0g 2920 D 15.4  3.3 1:26.73 glusterfsd
 9940 root  20   0 4321m 1.4g 2920 S 15.4  4.5 1:28.84 glusterfsd
10724 root  20   0 4321m 1.4g 2920 D 15.4  4.5 1:25.27 glusterfsd
10753 root  20   0 4321m 1.4g 2920 S 15.4  4.5 1:26.44 glusterfsd
10733 root  20   0 3745m 1.4g 2920 R 15.1  4.6 1:28.42 glusterfsd
10755 root  20   0 3745m 1.4g 2920 S 15.1  4.6 1:31.19 glusterfsd
10091 root  20   0 34.0g  19g 2920 S 15.1 63.3 1:33.56 glusterfsd
10778 root  20   0 34.0g  19g 2920 S 15.1 63.3 1:31.88 glusterfsd
 9894 root  20   0 3681m 1.0g 2920 D 15.1  3.3 1:32.51 glusterfsd
10736 root  20   0 3681m 1.0g 2920 S 15.1  3.3 1:27.33 glusterfsd
10746 root  20   0 4321m 1.4g 2920 D 15.1  4.5 1:25.14 glusterfsd
10744 root  20   0 4961m 1.6g 2920 S 14.8  5.2 1:29.22 glusterfsd
10743 root  20   0 3745m 1.4g 2920 S 14.8  4.6 1:29.96 glusterfsd
10784 root  20   0 34.0g  19g 2920 S 14.8 63.3 1:31.92 glusterfsd
 9735 root  20   0 4961m 1.6g 2920 S 14.4  5.2 1:28.84 glusterfsd
 9903 root  20   0 4961m 1.6g 2920 S 14.4  5.2 1:28.63 glusterfsd
Attaching the statedumps of the volumes.
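For reference, statedumps like the attached one can be generated with
`gluster volume statedump <volname>` (or by sending SIGUSR1 to the brick
process), and the per-pool counters in them usually point at the leaking
allocation. Below is a minimal sketch for ranking mempools by in-use bytes; it
assumes the flat key=value layout (`pool-name=`, `hot-count=`,
`padded_sizeof=`) used by 3.x statedumps and is illustrative only:

```python
def parse_mempools(text):
    """Collect per-pool counters from a statedump's key=value lines.

    Each pool is assumed to start with a pool-name= line, followed by
    numeric fields such as hot-count= and padded_sizeof=.
    """
    pools = []
    current = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("pool-name="):
            if current:
                pools.append(current)
            current = {"name": line.split("=", 1)[1]}
        elif "=" in line and current:
            key, _, value = line.partition("=")
            if value.lstrip("-").isdigit():  # keep numeric fields only
                current[key] = int(value)
    if current:
        pools.append(current)
    return pools

def top_consumers(text, n=5):
    """Rank pools by in-use bytes, estimated as hot-count * padded_sizeof."""
    pools = parse_mempools(text)
    ranked = sorted(pools,
                    key=lambda p: p.get("hot-count", 0) * p.get("padded_sizeof", 0),
                    reverse=True)
    return [(p["name"], p.get("hot-count", 0) * p.get("padded_sizeof", 0))
            for p in ranked[:n]]
```

Comparing such rankings across two statedumps taken some minutes apart shows
which pool (if any) is growing; if none grows while RES does, the leak is
outside the mempools.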