[Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client
rabhat at redhat.com
Mon Sep 28 11:31:41 UTC 2015
You are right. The description should have said that it is the limit on the
number of inodes in the lru list of the inode cache. I have sent a patch.
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <
oleksandr at natalenko.name> wrote:
> I've checked the statedump of the volume in question and haven't found lots
> of iobufs as mentioned in that bug report.
> However, I've noticed that there are lots of LRU records like this:
> In fact, there are 16383 of them. I've checked "gluster volume set help"
> in order to find something LRU-related and have found this:
> Option: network.inode-lru-limit
> Default Value: 16384
> Description: Specifies the maximum megabytes of memory to be used in the
> inode cache.
> Is there an error in the description stating "maximum megabytes of memory"?
> Shouldn't it mean "maximum number of LRU records"? If not, is it true
> that the inode cache could grow up to 16 GiB per client, and one must lower
> the network.inode-lru-limit value?
> Another thought: we've enabled write-behind, and the default
> write-behind-window-size value is 1 MiB. So, one may conclude that with
> lots of small files written, the write-behind buffer could grow up to
> inode-lru-limit × write-behind-window-size = 16 GiB? Who could explain that to me?
> On 24.09.2015 10:42, Gabi C wrote:
>> oh, my bad...
>> could it be this one?
>> https://bugzilla.redhat.com/show_bug.cgi?id=1126831
>> Anyway, on oVirt + Gluster I experienced similar behavior...
> Gluster-devel mailing list
> Gluster-devel at gluster.org
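[Editorial note] The worst-case arithmetic behind the 16 GiB figure in the quoted mail can be checked from the two defaults it cites. This is a pessimistic sketch assuming every cached inode simultaneously holds a full write-behind window, which is an upper bound rather than typical behavior:

```shell
# Upper-bound estimate: every inode in the lru list holds a full
# write-behind window at once (worst case, not steady state).
inode_lru_limit=16384   # default network.inode-lru-limit
window_mib=1            # default performance.write-behind-window-size, in MiB
total_mib=$((inode_lru_limit * window_mib))
echo "worst case: ${total_mib} MiB = $((total_mib / 1024)) GiB"
```

This reproduces the 16 GiB bound mentioned above; actual memory use depends on how many inodes have dirty write-behind data at the same time.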
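[Editorial note] If the option is indeed a record count rather than a size in megabytes, lowering it from the CLI is straightforward. A hedged sketch, assuming a volume named `myvol` (placeholder) and a Gluster release whose CLI supports `volume get`:

```shell
# Inspect the current value (volume name "myvol" is a placeholder)
gluster volume get myvol network.inode-lru-limit

# Lower the number of inodes kept in the lru list of the inode cache
gluster volume set myvol network.inode-lru-limit 4096
```

Choosing a lower limit trades inode-cache hit rate for a smaller memory footprint on the client.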