[Gluster-users] Fuse memleaks, all versions

Yannick Perret yannick.perret at liris.cnrs.fr
Fri Jul 29 19:47:36 UTC 2016


On 29/07/2016 20:27, Pranith Kumar Karampuri wrote:
>
>
> On Fri, Jul 29, 2016 at 10:09 PM, Pranith Kumar Karampuri 
> <pkarampu at redhat.com <mailto:pkarampu at redhat.com>> wrote:
>
>
>
>     On Fri, Jul 29, 2016 at 2:26 PM, Yannick Perret
>     <yannick.perret at liris.cnrs.fr
>     <mailto:yannick.perret at liris.cnrs.fr>> wrote:
>
>         Ok, last try:
>         after investigating more versions I found that the FUSE client
>         leaks memory on all of them.
>         I tested:
>         - 3.6.7 client on debian 7 32bit and on debian 8 64bit (with
>         3.6.7 servers on debian 8 64bit)
>         - 3.6.9 client on debian 7 32bit and on debian 8 64bit (with
>         3.6.7 servers on debian 8 64bit)
>         - 3.7.13 client on debian 8 64bit (with 3.8.1 servers on
>         debian 8 64bit)
>         - 3.8.1 client on debian 8 64bit (with 3.8.1 servers on
>         debian 8 64bit)
>         In all cases the client was compiled from sources, apart from
>         3.8.1 where the .deb packages were used (due to a configure
>         runtime error, see the note below).
>         For 3.7 it was compiled with --disable-tiering. I also tried
>         to compile with --disable-fusermount (no change).
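>         (Concretely the builds were along these lines; a sketch rather
>         than the exact command lines, with the default prefix and the
>         install step assumed:
>           ./configure --disable-tiering   # --disable-fusermount also tried, no change
>           make && make install
>         )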
>
>         In all of these cases the memory (resident & virtual) of the
>         glusterfs process on the client grows with each activity and
>         never reaches a maximum (and never shrinks).
>         "Activity" for these tests is cp -Rp and ls -lR.
>         The client I let grow the longest reached ~4 GB of RAM. On
>         smaller machines it ends with the OOM killer killing the
>         glusterfs process, or glusterfs dying on an allocation error.
>
>         In 3.6 memory seems to grow continuously, whereas in 3.8.1 it
>         grows by "steps" (430400 KB → 629144 (~1 min) → 762324
>         (~1 min) → 827860…).
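>         (For reference, the growth is easy to watch on the client with
>         something like the following; a rough sketch that assumes the
>         FUSE mount is the only glusterfs process on the machine:
>           while true; do
>             grep VmRSS /proc/$(pidof glusterfs)/status  # resident size
>             sleep 60
>           done
>         )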
>
>         All tests were performed on a single test volume used only by
>         my test client. The volume is a basic 2x replica. The only
>         parameters I changed on this volume (without any effect on the
>         leak) are diagnostics.client-log-level set to ERROR and
>         network.inode-lru-limit set to 1024.
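>         (That is, on the servers, something like the following, with
>         <volname> standing for my test volume:
>           gluster volume set <volname> diagnostics.client-log-level ERROR
>           gluster volume set <volname> network.inode-lru-limit 1024
>         )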
>
>
>     Could you attach statedumps of your runs?
>     The following link has the steps to capture them:
>     https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
>     We basically need to see which memory types are increasing. If
>     you could help find the issue, we can send the fixes for your
>     workload. There is a 3.8.2 release in around 10 days I think. We
>     can probably target this issue for that?
>
>
> hi,
>          We found a problem here: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1361681#c0. Based on 
> git-blame this bug has existed since 2012-Aug, maybe even longer. I 
> am wondering if you are running into this. Would you be willing to 
> help test the fix if we provide it? I don't think a lot of others 
> have run into this problem.
Yes, I saw that this seems to be a long-standing bug.
I'm surprised that it doesn't hit more people, because I'm really using 
a very simple and basic configuration (2x replica servers + FUSE 
clients, which is the basic tutorial setup in the GlusterFS docs). 
Maybe few people use the FUSE client, or maybe only in a 
mount-use-umount manner.

I will send reports as explained in your previous mail.
I have 2 servers and 1 client that are test machines, so I can do what 
I want on them. I can also apply patches, as my servers/client are 
built from sources (and the memory leak is easy and fast to check: with 
intensive activity I can go from ~140 MB to >2 GB in less than 2 hours).
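(For the FUSE client, my understanding of the statedump doc is that the 
dump is triggered by sending SIGUSR1 to the mount process, roughly:
   kill -USR1 $(pidof glusterfs)   # assumes a single glusterfs process
with the dump files landing under /var/run/gluster by default; the 
exact output directory may differ on my source-built client, so I will 
check that before sending.)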

Note: I had a problem with the 3.8.1 sources → running ./configure complains with:
configure: WARNING: cache variable ac_cv_build contains a newline
configure: WARNING: cache variable ac_cv_host contains a newline
and calling 'make' then fails with:
Makefile:90: *** missing separator (did you mean TAB instead of 8 
spaces?). Stop.
That's why I used the .deb packages from the GlusterFS downloads 
instead of the sources for this version.

--
Y.
>
>
>
>
>         This clearly prevents us from using glusterfs on our clients.
>         Is there any way to prevent this from happening? For now I
>         have switched back to NFS mounts, but it is not what we're
>         looking for.
>
>         Regards,
>         --
>         Y.
>
>
>
>         _______________________________________________
>         Gluster-users mailing list
>         Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>         http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
>     -- 
>     Pranith
>
>
>
>
> -- 
> Pranith
