[Bugs] [Bug 1781550] New: glusterfs process memory leak in ior test

bugzilla at redhat.com bugzilla at redhat.com
Tue Dec 10 09:17:24 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1781550

            Bug ID: 1781550
           Summary: glusterfs process memory leak in ior test
           Product: Red Hat Gluster Storage
           Version: rhgs-3.5
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: read-ahead
          Keywords: Triaged
          Severity: high
          Assignee: rgowdapp at redhat.com
          Reporter: hgowtham at redhat.com
        QA Contact: rhinduja at redhat.com
                CC: bugs at gluster.org, jahernan at redhat.com, pasik at iki.fi,
                    rhs-bugs at redhat.com, shujun.huang at nokia-sbell.com,
                    zz.sh.cynthia at gmail.com
        Depends On: 1779055
  Target Milestone: ---
    Classification: Red Hat



+++ This bug was initially created as a clone of Bug #1779055 +++

Description of problem:
When testing with the ior tool, a memory leak was observed in the glusterfs client process.
No other operations were running; only I/O was performed through the gluster client process.
The glusterfs client process consumes more and more memory.

Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


How reproducible:


Steps to Reproduce:
1. Begin I/O with the ior tool:
python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take a statedump of the glusterfs client process (see the example command after this list).
3. From the statedumps, the glusterfs client process memory keeps increasing, even after
stopping the test and deleting all created files.
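
A client-side statedump can be triggered by sending SIGUSR1 to the glusterfs client process;
the dump file is written to the statedump directory, typically /var/run/gluster. A minimal
sketch, assuming a single glusterfs client process on the node (adjust the PID selection if
other glusterfs processes such as self-heal daemons are also running):

kill -USR1 $(pidof glusterfs)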

Actual results:
Memory usage goes up and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal.

Additional info:
From the statedumps, xlator.mount.fuse.itable.active_size keeps increasing; this can be
seen in the enclosed statedump.
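
The growth can be tracked by grepping the counter out of successive dump files, for example
(the dump filename below is a placeholder; use the actual file written to the statedump
directory):

grep 'xlator.mount.fuse.itable.active_size' /var/run/gluster/glusterdump.<pid>.dump.<timestamp>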

--- Additional comment from zhou lin on 2019-12-03 07:57:23 UTC ---

# gluster v info config

Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51

--- Additional comment from zhou lin on 2019-12-04 02:54:53 UTC ---

From the statedump it is quite obvious that xlator.mount.fuse.itable.active_size keeps
growing; more and more of the following sections appear in the statedump:

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
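
As a quick check on such a dump, the refs held by readdir-ahead can be summed across all
active inodes with a one-liner like the following (the dump filename is a placeholder; the
ref_by_xl key format is as shown in the excerpt above):

grep 'ref_by_xl:.*readdir-ahead' /var/run/gluster/glusterdump.<pid>.dump.<timestamp> | awk -F'=' '{sum += $2} END {print sum}'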

--- Additional comment from Worker Ant on 2019-12-05 05:55:21 UTC ---

REVIEW: https://review.gluster.org/23811 (To fix readdir-ahead memory leak)
posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 06:37:29 UTC ---

REVIEW: https://review.gluster.org/23812 (To fix readdir-ahead memory leak)
posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 07:05:12 UTC ---

REVIEW: https://review.gluster.org/23813 (To fix readdir-ahead memory leak)
posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 08:08:58 UTC ---

REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak)
posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-10 05:01:24 UTC ---

REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak)
merged (#2) on master by Amar Tumballi


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1779055
[Bug 1779055] glusterfs process memory leak in ior test