[Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

Oleksandr Natalenko oleksandr at natalenko.name
Wed Jan 20 00:11:02 UTC 2016


And another statedump of FUSE mount client consuming more than 7 GiB of RAM:

https://gist.github.com/136d7c49193c798b3ade

DHT-related leak?
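One way to check that hypothesis is to rank the statedump's memory-accounting records by allocation count and see whether cluster/distribute (DHT) types dominate. A rough sketch, assuming the usual statedump memusage layout (a `[... memusage]` section header followed by `size=`/`num_allocs=` lines):

```shell
# Rank memusage records in a glusterfs statedump by num_allocs; if DHT
# leaks, cluster/distribute sections should float to the top.
rank_statedump() {
  awk -F= '
    /^\[/          { section = $0 }   # remember the current section header
    /^num_allocs=/ { print $2, section }
  ' "$1" | sort -rn | head -n 10
}
```

Usage: `rank_statedump /var/run/gluster/glusterdump.<pid>.dump.<timestamp>`.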

On Wednesday, January 13, 2016, 16:26:59 EET Soumya Koduri wrote:
> On 01/13/2016 04:08 PM, Soumya Koduri wrote:
> > On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
> >> Just in case, here is Valgrind output on FUSE client with 3.7.6 +
> >> API-related patches we discussed before:
> >> 
> >> https://gist.github.com/cd6605ca19734c1496a4
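For context, a Valgrind run against the FUSE client looks roughly like the following (volume name, server, and mount point are placeholders; the client has to stay in the foreground with -N/--no-daemon so Valgrind can follow it instead of losing the daemonized child):

```shell
# Hypothetical volume and mount point -- adjust for the real setup.
VOLUME="test-vol"
MOUNT="/mnt/gluster"

# Sketch of the invocation; printed rather than executed here.
CMD="valgrind --leak-check=full --log-file=/tmp/glusterfs-fuse.vg \
glusterfs -N --volfile-server=localhost --volfile-id=$VOLUME $MOUNT"

echo "$CMD"
```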
> > 
> > Thanks for sharing the results. I made changes to fix one leak reported
> > there wrt ' client_cbk_cache_invalidation' -
> > 
> >      - http://review.gluster.org/#/c/13232/
> > 
> > The other inode*-related memory reported as lost is most likely because
> > the fuse client process doesn't clean up its memory (doesn't call
> > fini()) while exiting. Hence the majority of those allocations are
> > listed as lost. But most of the inodes should have been purged when we
> > drop the vfs cache. Did you drop the vfs cache before exiting the process?
> > 
> > I shall add some log statements and check that part.
> 
> Also, please take a statedump of the fuse mount process (after dropping
> the vfs cache) when you see high memory usage, by issuing the following
> command -
> 	'kill -USR1 <pid-of-gluster-process>'
> 
> The statedump will be copied to a `glusterdump.<pid>.dump.<timestamp>`
> file in /var/run/gluster or /usr/local/var/run/gluster.
> Please refer to [1] for more information.
> 
> Thanks,
> Soumya
> [1] http://review.gluster.org/#/c/8288/1/doc/debugging/statedump.md
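The procedure above (drop the vfs cache, then signal the client) can be sketched as a pair of helpers; the pid is a placeholder and dropping caches requires root:

```shell
# Purge vfs caches, then ask the glusterfs fuse client (by pid) to write
# a statedump via SIGUSR1.
dump_fuse_client() {
  pid="$1"
  sync
  echo 3 2>/dev/null > /proc/sys/vm/drop_caches || true  # needs root
  kill -USR1 "$pid"
}

# Statedumps land in /var/run/gluster (or /usr/local/var/run/gluster);
# return the newest glusterdump.<pid>.dump.<timestamp> file for a pid.
latest_statedump() {
  dir="$1"; pid="$2"
  ls -1t "$dir"/glusterdump."$pid".dump.* 2>/dev/null | head -n 1
}
```

Usage: `dump_fuse_client <pid>; latest_statedump /var/run/gluster <pid>`.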
> 
> > Thanks,
> > Soumya
> > 
> >> On 12.01.2016 08:24, Soumya Koduri wrote:
> >>> For the fuse client, I tried vfs drop_caches as suggested by Vijay in
> >>> an earlier mail. Though all the inodes get purged, I still don't see
> >>> much of a drop in the memory footprint. Need to investigate what else
> >>> is consuming so much memory here.
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users

More information about the Gluster-devel mailing list