[Gluster-devel] GlusterFS FUSE client leaks summary — part I
Xavier Hernandez
xhernandez at datalab.es
Sat Jan 30 21:56:37 UTC 2016
There's another inode leak caused by an incorrect counting of lookups on directory reads. Here's a patch that solves the problem for 3.7:

http://review.gluster.org/13324

Hopefully with this patch the memory leaks should disappear.
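To illustrate the kind of accounting the patch is about, here is a minimal standalone sketch (plain illustrative C, not actual GlusterFS code; all names are made up): every entry the filesystem hands back to the kernel, whether through a LOOKUP reply or a READDIRPLUS row, adds one nlookup reference, and the kernel's later FORGET must drop exactly that many, otherwise the inode can never be retired.

/*
 * Illustrative sketch only, not GlusterFS code: the nlookup bookkeeping
 * between a FUSE filesystem and the kernel.
 */
#include <stdio.h>

struct inode {
    unsigned long nlookup;  /* references the kernel holds on this inode */
};

static void inode_ref_from_kernel(struct inode *in)
{
    /* called once for every entry handed back to the kernel
       (a LOOKUP reply or a READDIRPLUS row) */
    in->nlookup++;
}

static void inode_forget(struct inode *in, unsigned long count)
{
    /* called when the kernel sends FORGET(count) for this inode */
    in->nlookup -= count;
    if (in->nlookup == 0)
        printf("nlookup hit zero, inode can be retired\n");
    else
        printf("inode still pinned by %lu kernel ref(s)\n", in->nlookup);
}

int main(void)
{
    struct inode entry = { 0 };

    inode_ref_from_kernel(&entry);  /* plain LOOKUP */
    inode_ref_from_kernel(&entry);  /* same entry returned again via READDIRPLUS */

    /* If the directory read were not counted, FORGET(2) would underflow;
       if FORGET carried only 1, the inode would stay pinned forever.
       Correct accounting keeps both sides in agreement. */
    inode_forget(&entry, 2);
    return 0;
}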
Xavi
On 29.01.2016 19:09, Oleksandr Natalenko wrote:
> Here is an intermediate summary of the current memory leak investigation in
> the FUSE client.
>
> I use the GlusterFS v3.7.6 release with the following patches:
>
> ===
> Kaleb S KEITHLEY (1):
>       fuse: use-after-free fix in fuse-bridge, revisited
>
> Pranith Kumar K (1):
>       mount/fuse: Fix use-after-free crash
>
> Soumya Koduri (3):
>       gfapi: Fix inode nlookup counts
>       inode: Retire the inodes from the lru list in inode_table_destroy
>       upcall: free the xdr* allocations
> ===
>
> With those patches we got the API leaks fixed (I hope; brief tests show that)
> and got rid of the "kernel notifier loop terminated" message. Nevertheless,
> the FUSE client still leaks.
>
> I have several test volumes with several million small files (100K…2M on
> average). I do two types of FUSE client testing:
>
> 1) find /mnt/volume -type d
> 2) rsync -av -H /mnt/source_volume/* /mnt/target_volume/
>
> And the most up-to-date results are shown below:
>
> === find /mnt/volume -type d ===
>
> Memory consumption: ~4G
> Statedump: https://gist.github.com/10cde83c63f1b4f1dd7a
> Valgrind: https://gist.github.com/097afb01ebb2c5e9e78d
>
> I guess this is fuse-bridge/fuse-resolve related.
>
> === rsync -av -H /mnt/source_volume/* /mnt/target_volume/ ===
>
> Memory consumption: ~3.3…4G
> Statedump (target volume): https://gist.github.com/31e43110eaa4da663435
> Valgrind (target volume): https://gist.github.com/f8e0151a6878cacc9b1a
>
> I guess this is DHT-related.
>
> Give me more patches to test :).
>