[Gluster-users] Fuse memleaks, all versions

Pranith Kumar Karampuri pkarampu at redhat.com
Fri Jul 29 16:39:41 UTC 2016


On Fri, Jul 29, 2016 at 2:26 PM, Yannick Perret
<yannick.perret at liris.cnrs.fr> wrote:

> Ok, last try:
> after investigating more versions I found that FUSE client leaks memory on
> all of them.
> I tested:
> - 3.6.7 client on debian 7 32bit and on debian 8 64bit (with 3.6.7
> servers on debian 8 64bit)
> - 3.6.9 client on debian 7 32bit and on debian 8 64bit (with 3.6.7
> servers on debian 8 64bit)
> - 3.7.13 client on debian 8 64bit (with 3.8.1 servers on debian 8 64bit)
> - 3.8.1 client on debian 8 64bit (with 3.8.1 servers on debian 8 64bit)
> In all cases the client was compiled from sources, apart from 3.8.1 where
> .deb packages were used (due to a configure runtime error).
> For 3.7 it was compiled with --disable-tiering. I also tried compiling
> with --disable-fusermount (no change).
>
> In all of these cases the memory (resident & virtual) of the glusterfs
> process on the client grows with each activity and never reaches a maximum
> (and never decreases). "Activity" for these tests is cp -Rp and ls -lR.
> The client I let grow the longest reached over ~4 GB of RAM. On smaller
> machines it ends with the OOM killer killing the glusterfs process, or
> with glusterfs dying due to an allocation error.
>
> In 3.6 memory seems to grow continuously, whereas in 3.8.1 it grows in
> "steps" (430400 kB → 629144 (~1 min) → 762324 (~1 min) → 827860…).
>
> All tests were performed on a single test volume used only by my test
> client. The volume is a basic 2x replica. The only parameters I changed on
> this volume (without any effect) are diagnostics.client-log-level set to
> ERROR and network.inode-lru-limit set to 1024.
>
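
A minimal sketch of how this step-wise growth can be logged while the
cp -Rp / ls -lR workload runs: the script below (an illustrative helper,
not part of GlusterFS) samples VmRSS of the glusterfs client process from
/proc every 30 seconds, which is enough to see the "steps" described above.

#!/usr/bin/env python3
# Sample the resident memory of the glusterfs FUSE client over time by
# reading VmRSS from /proc/<pid>/status (standard Linux interface).
# Illustrative helper only -- not part of GlusterFS.
import subprocess
import time

def glusterfs_pid():
    # Assumes a single glusterfs client process on the test machine.
    return int(subprocess.check_output(["pidof", "glusterfs"]).split()[0])

def vm_rss_kb(pid):
    # VmRSS is reported in kB in /proc/<pid>/status.
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

if __name__ == "__main__":
    pid = glusterfs_pid()
    while True:
        print("%s  VmRSS=%d kB" % (time.strftime("%H:%M:%S"), vm_rss_kb(pid)))
        time.sleep(30)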

Could you attach statedumps of your runs?
The following link has the steps to capture them:
https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
We basically need to see which memory types are increasing. If you can
help us find the issue, we can send the fixes for your workload. There is
a 3.8.2 release in around 10 days, I think; we could probably target this
issue for that release.
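
For reference, the linked page captures a FUSE-client statedump by sending
SIGUSR1 to the glusterfs process, and the dump is written under
/var/run/gluster by default. The sketch below (an illustrative helper, not
part of GlusterFS, and assuming the usual "size=" lines under each
"[... memusage]" section of the dump file) triggers a dump and prints the
largest memory types; two runs taken a few minutes apart should show which
types keep growing.

#!/usr/bin/env python3
# Trigger a statedump of the FUSE client (SIGUSR1, as described in the
# page linked above) and list the biggest "memusage" entries from the
# newest dump. Assumes the default dump directory /var/run/gluster and
# the usual "size=" lines under each "[... memusage]" section; adjust
# the path and parsing if your build writes dumps elsewhere.
import glob
import os
import signal
import subprocess
import time

DUMP_DIR = "/var/run/gluster"

def trigger_statedump():
    pid = int(subprocess.check_output(["pidof", "glusterfs"]).split()[0])
    os.kill(pid, signal.SIGUSR1)   # asks the client to write a statedump
    time.sleep(2)                  # give it a moment to finish writing

def top_memusage(path, limit=15):
    usage, section = {}, None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("[") and line.endswith("]"):
                section = line[1:-1] if line.endswith("memusage]") else None
            elif section and line.startswith("size="):
                usage[section] = int(line.split("=", 1)[1])
    return sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:limit]

if __name__ == "__main__":
    trigger_statedump()
    latest = max(glob.glob(os.path.join(DUMP_DIR, "*.dump.*")),
                 key=os.path.getmtime)
    print("statedump: %s" % latest)
    for name, size in top_memusage(latest):
        print("%12d  %s" % (size, name))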


>
> This clearly prevents us from using glusterfs on our clients. Is there any
> way to prevent this from happening? For now I have switched back to NFS
> mounts, but that is not what we're looking for.
>
> Regards,
> --
> Y.
>



-- 
Pranith