[Bugs] [Bug 1623107] FUSE client's memory leak

bugzilla at redhat.com bugzilla at redhat.com
Mon Dec 31 05:09:01 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1623107

Nithya Balachandran <nbalacha at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(y.zhao at nokia.com)



--- Comment #32 from Nithya Balachandran <nbalacha at redhat.com> ---
(In reply to Yan from comment #29)
> Please refer the attachment 1517308 [details] for statedump output every
> half hour. 
> 
> 1). Test is done with 5.1. Similar issue has been observed in 3.12.13, 4.1.4
> as well. 
> 
> # gluster --version
> glusterfs 5.1
> 
> 2). The above statedump output is done with a "find" operation every second
> on the mount dir. 
> 


Observations:

For the statedumps captured with readdir-ahead off:
1. There is no increase in the number of inodes. Did you see the memory continue
to rise after the first run (during which gluster creates and caches the
inodes) while the test was running in this case? If yes, by how much?
2. The number of dict_t allocs also remains constant, though high. Was
readdir-ahead initially on for this volume, and was the volume remounted after
turning readdir-ahead off? If not, the high number is due to the leak while
readdir-ahead was enabled, as that memory is not freed when readdir-ahead is
disabled but the volume is not remounted.
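To compare successive half-hourly statedumps, one can pull out the inode-table
and dict_t counters and watch how they change. A minimal Python sketch,
assuming the usual statedump field names (`itable.active_size`, and
`num_allocs` under the `gf_common_mt_dict_t memusage` section); the excerpt
below is illustrative, so verify the names against your own dumps:

```python
import re

# Illustrative excerpt of a FUSE client statedump (normally produced by
# sending SIGUSR1 to the glusterfs client process); real dumps are much
# larger. Field names are assumptions based on the common statedump layout.
STATEDUMP = """\
[xlator.mount.fuse.itable]
xlator.mount.fuse.itable.active_size=1523
xlator.mount.fuse.itable.lru_size=16384
[global.glusterfs - usage-type gf_common_mt_dict_t memusage]
size=987654
num_allocs=35000
max_num_allocs=35210
"""

def counters(dump):
    """Return (inode-table active size, dict_t allocation count) from a dump."""
    inodes = int(re.search(r"itable\.active_size=(\d+)", dump).group(1))
    dicts = int(re.search(
        r"gf_common_mt_dict_t memusage\]\n(?:.*\n)*?num_allocs=(\d+)", dump
    ).group(1))
    return inodes, dicts

# Compare these two numbers across the half-hourly dumps: a steadily growing
# value indicates a leak rather than normal caching.
print(counters(STATEDUMP))
```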

With readdir-ahead on:
1. The inode count does not increase. The dict_t count does, but that is a
known issue, so I would expect the memory to rise steadily.
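For reference, the option change plus remount described above can be sketched
as follows; the volume name, server, and mount point are placeholders for your
own setup:

```shell
# Placeholders: adjust VOL, server1 and MNT to your deployment.
VOL=testvol
MNT=/mnt/glusterfs

gluster volume set "$VOL" performance.readdir-ahead off

# Disabling the option alone does not release memory already allocated while
# readdir-ahead was on; the FUSE client must be remounted for that.
umount "$MNT"
mount -t glusterfs server1:/"$VOL" "$MNT"
```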

-- 
You are receiving this mail because:
You are on the CC list for the bug.


More information about the Bugs mailing list