[Gluster-users] Memory leak with glusterfs NFS on 3.2.6

Rajesh Amaravathi rajesh at redhat.com
Thu Jun 21 06:49:19 UTC 2012


Hi all, 
I am looking into this issue, but could not make much from the statedumps. 
I will try to reproduce it. If I know what kind of operations (reads, writes, metadata r/w, etc.) are being done, 
and whether there are any other configuration changes w.r.t. GlusterFS, it would be of great help. 
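
If it helps, the rough Python sketch below shows the kind of info I'm after: the volume 
configuration plus a fresh statedump of the NFS server process. It assumes SIGUSR1 triggers 
the dump (written under /tmp on 3.2.x) and that the NFS server can be found with the pgrep 
pattern shown; both are assumptions to check against your setup. 

#!/usr/bin/env python
# Rough sketch: collect volume configuration and trigger a statedump of the
# GlusterFS NFS server process.  Assumes SIGUSR1 makes the process write a
# dump under /tmp (expected on 3.2.x) and that the pgrep pattern below
# matches the NFS server -- verify both against your setup.
import os
import signal
import subprocess

def nfs_server_pid():
    # Assumed pattern: the gluster NFS server is a glusterfs process with
    # "nfs" somewhere on its command line.  Adjust if it matches the wrong pid.
    out = subprocess.check_output(["pgrep", "-f", "glusterfs.*nfs"])
    return int(out.split()[0].decode())

def collect():
    # Volume configuration to send along with the dumps.
    subprocess.call(["gluster", "volume", "info"])
    pid = nfs_server_pid()
    os.kill(pid, signal.SIGUSR1)  # ask the process to dump its internal state
    print("Sent SIGUSR1 to pid %d; look for glusterdump.%d* under /tmp" % (pid, pid))

if __name__ == "__main__":
    collect()

Running it once before and once after a day of load, and sending both dumps along with the 
"gluster volume info" output, would give us something to compare. 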


Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
Red Hat, Inc. 
----- Original Message -----

From: "Xavier Normand" <xavier.normand at gmail.com> 
To: "Philip Poten" <philip.poten at gmail.com> 
Cc: gluster-users at gluster.org 
Sent: Tuesday, June 12, 2012 6:32:41 PM 
Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6 

Hi Philip, 


I have about the same problem you describe. Here is my setup: 


Gluster: Two bricks running gluster 3.2.6 


Clients: 
4 clients running the native GlusterFS FUSE client. 
2 clients running the NFS client. 


My NFS clients are not generating much traffic, but after a couple of days I could see that the brick used for the NFS mount was running into memory issues. 


I can provide more info as needed to help correct the problem. 


Thanks, 


Xavier 

On 2012-06-12, at 08:18, Philip Poten wrote: 

2012/6/12 Dan Bretherton <d.a.bretherton at reading.ac.uk>: 
> I wonder if this memory leak is the cause of the NFS performance 
> degradation I reported in April. 

That's probable, since performance goes down for us too when the 
glusterfs process reaches a large percentage of RAM. My initial 
guess was that the file system cache was being evicted, which 
increased iowait. But a closer look at our munin graphs suggests 
that user space is also eating more and more CPU in proportion to 
RAM usage: 

http://imgur.com/a/8YfhQ 

There are two restarts of the whole gluster process family visible on 
those graphs: one a week ago at the very beginning (white in the 
memory graph, as munin couldn't fork all it needed), and one 
yesterday. The drop between the 8th and the 9th was due to a problem 
unrelated to gluster. 

Pranith: I just made one dump; tomorrow I'll make one more and mail 
them both to you so that you can compare them. Although I only 
restarted yesterday, the leak should already be visible, as the 
process grows by a few hundred MB every day. 
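
To put a number on that growth independently of munin, a small sketch like the one 
below could log the NFS server's resident size over time; the pgrep pattern used to 
find the process is an assumption, so adjust it to your process list. 

#!/usr/bin/env python
# Quick sketch: log the resident set size (VmRSS) of the gluster NFS server
# process every few minutes, to measure the day-over-day growth.  The pgrep
# pattern used to find the process is an assumption -- adjust as needed.
import subprocess
import time

def rss_kb(pid):
    # /proc/<pid>/status reports VmRSS in kB.
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def main():
    out = subprocess.check_output(["pgrep", "-f", "glusterfs.*nfs"])
    pid = int(out.split()[0].decode())
    while True:
        print("%s rss=%d kB" % (time.strftime("%Y-%m-%d %H:%M:%S"), rss_kb(pid)))
        time.sleep(300)  # sample every five minutes

if __name__ == "__main__":
    main()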

Thanks for the fast reply, 
Philip 



_______________________________________________ 
Gluster-users mailing list 
Gluster-users at gluster.org 
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capture d'écran 2012-06-12 à 08.59.17.png
Type: image/png
Size: 36146 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20120621/8c205e1e/attachment.png>

