[Bugs] [Bug 1623107] FUSE client's memory leak

bugzilla at redhat.com bugzilla at redhat.com
Thu Jan 3 05:58:35 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1623107



--- Comment #36 from Nithya Balachandran <nbalacha at redhat.com> ---
(In reply to Znamensky Pavel from comment #33)
> (In reply to Nithya Balachandran from comment #31)
> > Then it is likely to be because the fuse client does not invalidate inodes.
> > Does your workload access a lot of files? The earlier statedump showed
> > around 3 million inodes in memory. 
> >
> >...
> >
> > https://review.gluster.org/#/c/glusterfs/+/19778/ has a fix to invalidate
> > inodes but is not targeted for release 5 as yet.
> 
> 
> Nithya, you're right!
> I built glusterfs from the current master
> (https://github.com/gluster/glusterfs/tree/
> d9a8ccd354df6db94477bf9ecb09735194523665) with the new invalidate inodes
> mechanism that you mentioned before, and RSS memory consumption indeed
> became much lower.
> And as you supposed, our apps quite often access a lot of files.
> Here are two tests with clients on v6dev and v4.1 (the server is still on
> v4.1 and read-ahead=on)
> 
> The first test with default --lru-limit=0 (just did `find /in/big/dir -type
> f`):
> 
> v4.1 - ~3GB RSS:
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root       633  6.6 18.5 3570216 3056136 ?     Ssl  19:44   6:25
> /usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
> 
> v6dev - ~1.5GB RSS:
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root     10851 16.5  9.2 2071036 1526456 ?     Ssl  19:45  15:50
> /usr/sbin/glusterfs --read-only --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
> 
> It looks good. Let's do the next test.
> The second test with --lru-limit=10000 for v6dev:
> 
> v4.1 - ~3GB RSS:
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root      3589  4.7 18.6 3570216 3060364 ?     Ssl  13:11  18:40
> /usr/sbin/glusterfs --process-name fuse --volfile-server=srv
> --volfile-id=/st1 /mnt/st1
> 
> v6dev - ~170MB RSS:
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root     24152 14.2  1.0 758768 173704 ?       Ssl  13:58  49:06
> /usr/sbin/glusterfs --lru-limit=10000 --process-name fuse
> --volfile-server=srv --volfile-id=/st1 /mnt/st1
> 
> 170MB vs. 3GB!
> It's incredible!
> Unfortunately, the new version has a drawback: CPU time increased about
> 2.5x. At the moment that doesn't matter for us.
> Anyway, I'm sure this change solves our problem. And of course, we're
> looking forward to a stable release that includes it.
> Thanks a lot!

Thank you for testing this. I'm glad to hear the patch is working as expected
to keep the memory use down.
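For anyone who wants to reproduce the comparison above: the client in the second test was started with --lru-limit=10000 (visible in the ps output), and RSS was sampled with ps. A minimal sketch of the measurement, where the glusterfs invocation and the mount path are placeholders taken from the report:

```shell
# Start the FUSE client with a bounded inode LRU cache (placeholders:
# "srv" is the volfile server, "st1" the volume, /mnt/st1 the mount point):
#   /usr/sbin/glusterfs --lru-limit=10000 --process-name fuse \
#       --volfile-server=srv --volfile-id=/st1 /mnt/st1
#
# Drive the same workload as the test (stat every file in a big tree):
#   find /mnt/st1 -type f > /dev/null

# Sample resident memory (RSS, in KiB) of a process; here we use the
# current shell's PID as a stand-in for the glusterfs client's PID:
rss_kib=$(ps -o rss= -p $$)
echo "RSS: ${rss_kib} KiB"
```

Sampling RSS before and after the `find` run makes the effect of the LRU limit directly visible, as in the ~170MB vs. ~3GB numbers reported above.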

-- 
You are receiving this mail because:
You are on the CC list for the bug.
