[Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

Oleksandr Natalenko oleksandr at natalenko.name
Thu Sep 24 08:14:46 UTC 2015


I've checked the statedump of the volume in question and haven't found lots 
of iobuf entries as mentioned in that bug report.

However, I've noticed that there are lots of LRU records like this:

===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-count=0
ref=0
ia_type=1
===
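
(In case anyone wants to reproduce the check, something along these lines 
should work. The volume name and dump file name below are placeholders, and 
the dump directory may differ depending on version and packaging.)

===
# trigger a statedump of the brick processes of the volume
gluster volume statedump <VOLNAME>

# count the LRU inode records in the resulting dump; the section headers
# of such records contain ".lru.", as in the excerpt above
grep -c '\.lru\.' /var/run/gluster/<brick>.<pid>.dump.<timestamp>
===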

In fact, there are 16383 of them. I've checked "gluster volume set help" 
in order to find something LRU-related and found this:

===
Option: network.inode-lru-limit
Default Value: 16384
Description: Specifies the maximum megabytes of memory to be used in the 
inode cache.
===
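
(To see what a volume is actually using: the volume name below is a 
placeholder; "gluster volume get" exists only on newer releases, and 
"gluster volume info" lists only options that were explicitly reconfigured.)

===
gluster volume get <VOLNAME> network.inode-lru-limit
gluster volume info <VOLNAME>
===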

Is there an error in the description where it says "maximum megabytes of 
memory"? Shouldn't it mean the maximum number of LRU records instead? If 
not, is it true that the inode cache could grow up to 16 GiB on a client, 
and one must lower the network.inode-lru-limit value?
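
If it really is a record count, lowering it should be a matter of something 
like this (the volume name and the value 1024 are only placeholders, not a 
recommendation):

===
gluster volume set <VOLNAME> network.inode-lru-limit 1024
===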

Another thought: we've enabled write-behind, and the default 
write-behind-window-size value is 1 MiB. So, one may conclude that with 
lots of small files being written, the write-behind buffer could grow up to 
inode-lru-limit × write-behind-window-size = 16 GiB? Could someone explain 
that to me?
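
Just to spell out the worst-case arithmetic behind that number (it assumes 
every cached inode keeps a full write-behind window outstanding, which may 
well not hold in practice):

===
16384 cached inodes × 1 MiB write-behind-window-size = 16384 MiB ≈ 16 GiB
===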

24.09.2015 10:42, Gabi C wrote:
> oh, my bad...
> could it be this one?
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1126831
> Anyway, on ovirt+gluster I experienced similar behavior...

