[Gluster-users] Memory leak with glusterfs NFS on 3.2.6

Philip Poten philip.poten at gmail.com
Thu Jun 21 07:33:53 UTC 2012


Hi Rajesh,

We are handling only small files, up to 10MB but mainly in the 5-250kB
range - in short, images in a flat structure of directories. Since a
varnish setup faces the internet and absorbs much of the read traffic,
my guess would be that reads and writes on gluster are somewhat
balanced, i.e. not in excessive relation to each other - but still
noticeably more reads than writes.

Files are almost never truncated, altered or deleted. I'm not sure whether
the backend writes resized images by creating and renaming them directly on
gluster, or by moving them onto gluster from another filesystem - the
sketch below illustrates the difference.
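
For clarity, here is the difference between the two cases, with
hypothetical paths - a rename on the gluster mount is a metadata-only
operation, while a cross-filesystem mv degrades to copy + unlink:

    # Sketch with hypothetical paths; "convert" is ImageMagick.
    # Case 1: create-and-rename on the gluster mount itself.
    # rename(2) stays inside one filesystem, so no data is rewritten.
    convert src.jpg -resize 50% /mnt/gluster/images/.tmp.$$.jpg
    mv /mnt/gluster/images/.tmp.$$.jpg /mnt/gluster/images/thumb.jpg

    # Case 2: create locally, then move onto gluster.
    # mv across filesystems falls back to copy + unlink.
    convert src.jpg -resize 50% /tmp/thumb.jpg
    mv /tmp/thumb.jpg /mnt/gluster/images/thumb.jpg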

The munin graph looks as if the memory consumption grows faster during
heavy usage.
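
A crude way to correlate the growth with load is to log the resident set
size of the glusterfs NFS server process over time, e.g. (a sketch; the
pid is a placeholder):

    # Sketch: log the RSS of the glusterfs NFS server every 5 minutes.
    # Replace 12345 with the actual pid (e.g. from `ps aux | grep glusterfs`).
    PID=12345
    while sleep 300; do
        echo "$(date '+%F %T') $(ps -o rss= -p "$PID") kB" >> /tmp/gluster-nfs-rss.log
    done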

"gluster volume top operations" returns with the usage help, so I can't
help you with that.
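
If I remember the 3.2 syntax right, top wants a volume name and a concrete
operation keyword rather than the literal word "operations" - something
like this ("images" is a placeholder volume name):

    # Sketch, assuming the 3.2 CLI syntax; "images" is a hypothetical volume name.
    gluster volume top images read list-cnt 10    # most-read files
    gluster volume top images write list-cnt 10   # most-written files
    gluster volume top images open                # files with the most opens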

Options Reconfigured:
performance.quick-read: off
performance.cache-size: 64MB
performance.io-thread-count: 64
performance.io-cache: on
performance.stat-prefetch: on
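
(These get applied at runtime with "gluster volume set"; a sketch, with
"images" again standing in for our volume name:)

    gluster volume set images performance.quick-read off
    gluster volume set images performance.cache-size 64MB
    gluster volume set images performance.io-thread-count 64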

I would gladly deploy a patched 3.2.6 deb package for better debugging or
help you with any other measure that doesn't require us to take it offline
for more than a minute.
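
For reference, the statedumps mentioned below were taken by signalling the
process - glusterfs writes a dump on SIGUSR1 (on 3.2 the files land in
/tmp, if I remember right):

    # Sketch: trigger a statedump of the glusterfs NFS server process.
    # $PID as above; on 3.2 the dump should appear as /tmp/glusterdump.<pid>.
    kill -USR1 "$PID"
    ls -l /tmp/glusterdump."$PID"*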

thanks for looking into that!

kind regards,
Philip

2012/6/21 Rajesh Amaravathi <rajesh at redhat.com>
>
> Hi all,
> I am looking into this issue, but could not make much from the statedumps.
> I will try to reproduce it. If I know what kind of operations (reads,
> writes, metadata r/ws, etc.) are being done, and whether there are any
> other configuration changes w.r.t. GlusterFS, it'll be of great help.
>
> Regards,
> Rajesh Amaravathi,
> Software Engineer, GlusterFS
> RedHat Inc.
> ________________________________
> From: "Xavier Normand" <xavier.normand at gmail.com>
> To: "Philip Poten" <philip.poten at gmail.com>
> Cc: gluster-users at gluster.org
> Sent: Tuesday, June 12, 2012 6:32:41 PM
> Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
>
>
> Hi Philip,
>
> I do have about the same problem that you describe. Here is my setup:
>
> Gluster: Two bricks running gluster 3.2.6
>
> Clients:
> 4 clients running the native gluster FUSE client.
> 2 clients running the NFS client.
>
> My NFS clients are not doing that much traffic, but after a couple of
> days I was able to see that the brick serving the NFS mount is having
> memory issues.
>
> I can provide more info as needed to help correct the problem.
>
> Thanks
>
> Xavier
>
>
>
> On 2012-06-12, at 08:18, Philip Poten wrote:
>
> 2012/6/12 Dan Bretherton <d.a.bretherton at reading.ac.uk>:
>
> I wonder if this memory leak is the cause of the NFS performance
> degradation I reported in April.
>
>
> That's probable, since performance does go down for us too when the
> glusterfs process reaches a large percentage of RAM. My initial guess
> was that the file system cache was being squeezed out, causing iowait
> to increase. But a closer look at our munin graphs suggests that user
> space also eats more and more CPU in proportion to RAM usage:
>
> http://imgur.com/a/8YfhQ
>
> There are two restarts of the whole gluster process family visible on
> those graphs: one a week ago at the very beginning (white in the
> memory graph, as munin couldn't fork all it needed), and one
> yesterday. The drop between 8 and 9 was due to a problem unrelated to
> gluster.
>
> Pranith: I just made one dump; tomorrow I'll make one more and mail
> them both to you so that you can compare them. Although I only
> restarted yesterday, the leak should already be visible, as the
> process grows by a few hundred MB every day.
>
> thanks for the fast reply,
> Philip
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users