[Bugs] [Bug 1789336] New: glusterfs process memory leak in ior test

bugzilla at redhat.com bugzilla at redhat.com
Thu Jan 9 11:53:31 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1789336

            Bug ID: 1789336
           Summary: glusterfs process memory leak in ior test
           Product: GlusterFS
           Version: 7
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: read-ahead
          Keywords: Triaged
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: jahernan at redhat.com
                CC: bugs at gluster.org, jahernan at redhat.com, pasik at iki.fi,
                    shujun.huang at nokia-sbell.com, zz.sh.cynthia at gmail.com
        Depends On: 1779055
            Blocks: 1781550
  Target Milestone: ---
    Classification: Community



+++ This bug was initially created as a clone of Bug #1779055 +++

Description of problem:
When testing with the ior tool, a memory leak was found in the glusterfs client process.
No other operations are running; only I/O is performed through the gluster client, yet the glusterfs client process consumes more and more memory.

Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


How reproducible:


Steps to Reproduce:
1. Begin I/O with the ior tool:
python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take statedumps of the glusterfs client process (see the commands below).
3. The statedumps show that the glusterfs client process memory keeps increasing, even after
the test is stopped and all created files are deleted.
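
A minimal way to collect the statedumps from step 2, assuming a plain FUSE mount and the default dump directory /var/run/gluster (the pid lookup and volume name below are illustrative):

# find the pid of the fuse client for the test volume
ps -ef | grep '[g]lusterfs.*testvol'
# SIGUSR1 asks the client process to write a statedump
kill -USR1 <pid-of-glusterfs-client>
# dumps are typically written as glusterdump.<pid>.dump.<timestamp>
ls -lt /var/run/gluster/glusterdump.*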

Actual results:
Memory usage keeps growing and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal after the test stops and the created files are removed.

Additional info:
In the statedump, xlator.mount.fuse.itable.active_size seems to keep increasing;
this can be seen in the enclosed statedump.
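
One way to check that counter across successive dumps, assuming the default dump location (the file pattern below is illustrative):

# print the active inode count recorded in each collected dump
grep 'xlator.mount.fuse.itable.active_size' /var/run/gluster/glusterdump.*.dump.*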

--- Additional comment from zhou lin on 2019-12-03 08:57:23 CET ---

# gluster v info config

Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51

--- Additional comment from zhou lin on 2019-12-04 03:54:53 CET ---

From the statedump it is quite obvious that
xlator.mount.fuse.itable.active_size keeps growing;
more and more sections like the following appear in the statedump (a way to count them is shown after the excerpt):

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
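
An illustrative way to count these active inode sections and to sum the references held by the readdir-ahead xlator (the dump file name is a placeholder; the xlator name hsjvol-readdir-ahead is taken from the excerpt above):

# count active inode table entries in the dump
grep -c '^\[xlator.mount.fuse.itable.active\.' glusterdump.<pid>.dump.<timestamp>
# sum refs held by readdir-ahead across all active entries
grep 'ref_by_xl:.*readdir-ahead' glusterdump.<pid>.dump.<timestamp> | awk -F= '{sum += $2} END {print sum}'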


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1779055
[Bug 1779055] glusterfs process memory leak in ior test
https://bugzilla.redhat.com/show_bug.cgi?id=1781550
[Bug 1781550] glusterfs process memory leak in ior test