[Bugs] [Bug 1738878] New: FUSE client's memory leak
bugzilla at redhat.com
Thu Aug 8 10:38:26 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1738878
Bug ID: 1738878
Summary: FUSE client's memory leak
Product: GlusterFS
Version: 5
OS: Linux
Status: NEW
Component: core
Severity: high
Assignee: bugs at gluster.org
Reporter: s.pleshkov at hostco.ru
CC: bugs at gluster.org
Target Milestone: ---
External Bug ID: Red Hat Bugzilla 1623107, Red Hat Bugzilla 1659432
Classification: Community
Description of problem:
A single FUSE client consumes a lot of memory.
In our client's production environment, a single FUSE client's memory usage
grows slowly but continuously until the process is killed by the OOM killer.
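For what it's worth, the OOM kills can be confirmed from the client's kernel
log; a minimal check (not taken from the original report) would be:
  dmesg -T | grep -i -e 'out of memory' -e 'killed process'
  journalctl -k | grep -i glusterfs | grep -i oom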
Version-Release number of selected component (if applicable):
Servers
# gluster --version
glusterfs 5.5
rpm -qa | grep glu
glusterfs-libs-5.5-1.el7.x86_64
glusterfs-fuse-5.5-1.el7.x86_64
glusterfs-client-xlators-5.5-1.el7.x86_64
centos-release-gluster5-1.0-1.el7.centos.noarch
glusterfs-api-5.5-1.el7.x86_64
glusterfs-cli-5.5-1.el7.x86_64
nfs-ganesha-gluster-2.7.1-1.el7.x86_64
glusterfs-5.5-1.el7.x86_64
glusterfs-server-5.5-1.el7.x86_64
Client
# gluster --version
glusterfs 5.6
# rpm -qa | grep glus
glusterfs-api-5.6-1.el7.x86_64
glusterfs-libs-5.6-1.el7.x86_64
glusterfs-cli-5.6-1.el7.x86_64
glusterfs-client-xlators-5.6-1.el7.x86_64
glusterfs-fuse-5.6-1.el7.x86_64
glusterfs-5.6-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
How reproducible:
Set up a GlusterFS replicated cluster (3 nodes, replica volume) holding many
small files.
Mount the volume with the FUSE client and run processes against the gluster
directory that read file metadata and write file contents (a rough
reproduction sketch follows below).
The problem shows up on one client whose processes (Java/C++ programs) read
executable files from the volume and write log files back to it; other clients
of the same volume running similar read/write workloads do not show the
problem.
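A rough reproduction sketch, assuming a volume named testvol served from node
gfs1 and mounted at /mnt/gluster (all names are placeholders, not from the
report):
  mount -t glusterfs gfs1:/testvol /mnt/gluster
  while true; do
      find /mnt/gluster -type f -exec stat {} + > /dev/null   # metadata reads over many small files
      head -c 4096 /dev/urandom >> /mnt/gluster/app.log       # log-style appends
  done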
Actual results:
RSS memory of the FUSE client grows infinitely.
Expected results:
RSS memory doesn't grow infinitely :)
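For reference, the client's RSS growth can be tracked with something like the
following (the PID below is a placeholder, not from the report):
  ps -o pid,rss,vsz,etime,cmd -C glusterfs     # RSS in KiB of the glusterfs client process(es)
  grep VmRSS /proc/<fuse-client-pid>/status    # same figure read directly from /proc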
Additional info:
Took statedumps from the problem client and found these results:
pool-name=data_t
active-count=40897046
sizeof-type=72
padded-sizeof=128
size=5234821888
shared-pool=0x7f6bf222aca0
pool-name=dict_t
active-count=40890978
sizeof-type=160
padded-sizeof=256
size=10468090368
shared-pool=0x7f6bf222acc8
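For context, a client-side statedump like the one above is typically produced
by sending SIGUSR1 to the FUSE client process; by default the dump file lands
under /var/run/gluster (the mount point below is a placeholder):
  kill -USR1 $(pgrep -f 'glusterfs.*/mnt/gluster')   # ask the FUSE client to dump its state
  ls -lt /var/run/gluster/glusterdump.*              # statedump files appear here by default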
Found a similar bug - https://bugzilla.redhat.com/show_bug.cgi?id=1623107
Disabled the "readdir-ahead" option on the volume (command below), but it
didn't help.