[Bugs] [Bug 1501146] New: FUSE client Memory usage issue

bugzilla at redhat.com bugzilla at redhat.com
Thu Oct 12 07:15:59 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1501146

            Bug ID: 1501146
           Summary: FUSE client Memory usage issue
           Product: GlusterFS
           Version: 3.10
         Component: fuse
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: joshua.coyle at probax.io
                CC: bugs at gluster.org



Created attachment 1337554
  --> https://bugzilla.redhat.com/attachment.cgi?id=1337554&action=edit
Gluster State Dump

Description of problem:

The glusterfs process backing the client FUSE mount consumes as much system
memory and swap as it can over time, eventually leading to the process being
killed by the OOM killer and the mount dropping.
This occurs after a large amount of data has been transferred over the mount
point (both in total size and file count; I've not been able to rule out one
over the other, as this machine does both regularly).

Version-Release number of selected component (if applicable):

glusterfs 3.10.3

How reproducible:

Highly consistent

Steps to Reproduce:
1. Mount a gluster volume via the FUSE client
2. Transfer a large amount of data over the mount
3. Watch memory usage of the glusterfs process increase over time
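The steps above can be sketched roughly as follows (the server, volume, and
mount-point names are placeholders; adjust the watch interval to taste):

```shell
# 1. Mount the volume over FUSE (names are examples, not from this report)
mount -t glusterfs server1:/myvol /mnt/gluster

# 2. Transfer a large amount of data over the mount point, e.g.
#    rsync -a /data/ /mnt/gluster/backup/

# 3. Watch resident memory (RSS) of the client process grow over time
watch -n 60 'ps -o pid,rss,vsz,cmd -C glusterfs'
```

Sending SIGUSR1 to the client glusterfs process makes it write a state dump
(by default under /var/run/gluster), which is how the attached dump can be
captured for memory-accounting analysis:

```shell
kill -USR1 "$(pgrep -f 'glusterfs.*/mnt/gluster')"
```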

Actual results:

Memory usage increases over time, eventually leading to the glusterfs process
being killed by the OOM killer and the mount dropping.

Expected results:

The glusterfs process should release memory it no longer needs, avoiding OOM
kills.

Additional info:

The gluster volume itself is on version 3.10.3.
I have one client on 3.10.3 and one client on 3.11.3; both experience the same
issue.
This only occurs on clients that consistently pass a large amount of traffic
(100s of GB daily).
These mounts also handle a large number of concurrent connections (up to 50 at
a time), which may be playing some part in the issue.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
