[Gluster-devel] valgrind logs for glusterfs-3.4 memory leak

Joe Julian joe at julianfamily.org
Fri Oct 17 06:23:41 UTC 2014


Mine is caused by qcow2 images used by KVM on a fuse mount. About a 
half dozen very busy images cause the leak pretty consistently.

Apparently, though I never got a chance to check it myself or collect 
any details, we had Jira building and tearing down VM images, also on a 
fuse mount, which would fill up all available memory in about an hour.

On 10/16/2014 11:08 PM, Pranith Kumar Karampuri wrote:
> hi Kaleb,
>       I went through the logs. I don't see anything significant. What 
> is the test case that recreates the mem-leak? Maybe I can try it on 
> my setup and get back to you?
>
> Pranith
> On 10/15/2014 08:57 PM, Kaleb S. KEITHLEY wrote:
>> As mentioned in the Gluster Community Meeting on irc today, here are 
>> the glusterfs client side valgrind logs. By 'glusterfs client side' I 
>> specifically mean the glusterfs fuse bridge daemon on the client.
>>
>> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/valgrind-3.4-memleak/ 
>>
>>
>> The basic test is simple: mount the gluster volume, make a deep 
>> directory path on the volume, e.g. /mnt/a/b/c/d/e/f/g, run `ls -R 
>> /mnt` three or five times, and unmount.
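
A minimal sketch of that reproduction, assuming a volume named `testvol` served from a host `server1` (both names, the mountpoint, and the valgrind options are illustrative, not from the thread; `glusterfs -N` keeps the fuse daemon in the foreground so valgrind can wrap it):

```shell
# Placeholders -- adjust for your environment (assumed names, not from the thread):
SERVER=server1
VOLUME=testvol
MNT=/mnt/testvol

# Run the fuse client under valgrind; -N (--no-daemon) keeps it in the
# foreground, so background the whole pipeline to get the shell back.
valgrind --leak-check=full --log-file=/tmp/glusterfs.fuse.out \
    glusterfs --volfile-server=$SERVER --volfile-id=$VOLUME -N $MNT &

# Exercise the mount: deep directory path, then repeated recursive listings.
mkdir -p $MNT/a/b/c/d/e/f/g
for i in 1 2 3; do ls -R $MNT > /dev/null; done

# Unmount so the daemon exits and valgrind emits its leak summary.
umount $MNT
```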
>>
>> tmp[35]/glusterfs.fuse.out are the logs from three or five ls -R.
>>
>> tmp[35]+/glusterfs.fuse.out are the logs as above, but with the 
>> addition that the directories are populated with a few files.
>>
>> Notice that both tmp[35]/glusterfs.fuse.out logs show approximately 
>> the same amount of {definitely,indirectly,possibly} lost memory, 
>> i.e. the number of `ls -R` invocations did not affect how much 
>> memory was leaked.
>>
>> The same is true for tmp[35]+/glusterfs.fuse.out, i.e. more `ls -R` 
>> did not affect the amount of memory leaked. _But_ notice that when 
>> the directories are populated with files, more memory was leaked, 
>> across the board, than when the directories were empty.
>>
>> Make sense? Any questions, don't hesitate to ask.
>>
>> Thanks,
>>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel



More information about the Gluster-devel mailing list