[Bugs] [Bug 1657202] New: Possible memory leak in 5.1 brick process
bugzilla at redhat.com
Fri Dec 7 12:59:53 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1657202
Bug ID: 1657202
Summary: Possible memory leak in 5.1 brick process
Product: GlusterFS
Version: 5
Component: core
Severity: urgent
Assignee: bugs at gluster.org
Reporter: rob.dewit at coosto.com
CC: bugs at gluster.org
Created attachment 1512497
--> https://bugzilla.redhat.com/attachment.cgi?id=1512497&action=edit
statedumps
Description of problem: the glusterfs fuse client process keeps on growing in memory
Version-Release number of selected component (if applicable): 5.1
How reproducible: always
Steps to Reproduce:
1. mount a gluster volume (an example mount command is sketched below)
2. use the volume
3. wait for the glusterfs process to grow
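For reference, a mount command along these lines should reproduce the client
shown under "Actual results" (SERVER is redacted; the options are the
mount.glusterfs equivalents of the glusterfs flags, so treat this as a sketch
rather than the exact command used):

  mount -t glusterfs \
    -o use-readdirp=off,attribute-timeout=600,entry-timeout=600,negative-timeout=600,noatime \
    SERVER:/jf-vol0 /mnt/jf-vol0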
Actual results:
The glusterfs process grows to tens of gigabytes of resident memory:
root 24837 27.5 35.2 24051028 23167000 ? Ssl Nov29 3133:46
/usr/sbin/glusterfs --use-readdirp=off --attribute-timeout=600
--entry-timeout=600 --negative-timeout=600 --fuse-mountopts=noatime
--process-name fuse --volfile-server=SERVER --volfile-id=jf-vol0
--fuse-mountopts=noatime /mnt/jf-vol0
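For anyone wanting to watch the growth over time, a loop along these lines is
enough (this assumes a single glusterfs client process on the machine; the log
path and interval are arbitrary):

  while true; do
    echo "$(date +%s) $(ps -o rss= -p "$(pidof glusterfs)")" >> /tmp/glusterfs-rss.log
    sleep 600
  done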
Expected results:
glusterfs uses a reasonable amount of memory.
Additional info:
The volume contains a large number (some millions) of small files. Some of
those are Python code, hence the negative-timeout mount option (Python tries to
open a lot of non-existent files, effectively killing volume performance).

Attached are four statedumps. I've also added redacted versions in which values
that are unchanged or merely fluctuate up and down are left out. Comparing them
with vimdiff, it looks like some of the values only keep on growing.
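In case it helps with reproducing: as far as I know, a statedump of the fuse
client can be triggered by sending SIGUSR1 to the glusterfs process
(kill -USR1 <pid>), and the dump files end up under /var/run/gluster by
default. To narrow the comparison down to the allocation counters instead of
eyeballing whole files in vimdiff, a rough filter like this works (the dump
filenames below are placeholders, not the attachment names):

  grep -E '^(\[|size=|num_allocs=)' glusterdump.a > a.counters
  grep -E '^(\[|size=|num_allocs=)' glusterdump.b > b.counters
  diff -u a.counters b.counters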