[Bugs] [Bug 1718734] New: Memory leak in glusterfsd process

bugzilla at redhat.com bugzilla at redhat.com
Mon Jun 10 06:17:59 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1718734

            Bug ID: 1718734
           Summary: Memory leak in glusterfsd process
           Product: GlusterFS
           Version: 5
          Hardware: mips64
                OS: Linux
            Status: NEW
         Component: disperse
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: abhishpaliwal at gmail.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Created attachment 1578935
  --> https://bugzilla.redhat.com/attachment.cgi?id=1578935&action=edit
Script to see the memory leak

Description of problem:

We are seeing a memory leak in the glusterfsd process when a specific file is
written and deleted at regular intervals.

Version-Release number of selected component (if applicable): GlusterFS 5.4


How reproducible:

The setup details and the test we are running are as follows:


One client, two Gluster servers.
The client writes and deletes one file every 15 minutes using the script
test_v4.15.sh.

IP addresses:
Server side:
128.224.98.157 /gluster/gv0/
128.224.98.159 /gluster/gv0/

Client side:
128.224.98.160 /gluster_mount/

Server side:
gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
128.224.98.159:/gluster/gv0/ force
gluster volume start gv0

root at 128:/tmp/brick/gv0# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 128.224.98.157:/gluster/gv0
Brick2: 128.224.98.159:/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
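
The ps_mem.py run below monitors the glusterfsd brick process by its PID (605
on this server). For anyone reproducing this, the brick PIDs can be confirmed
with the standard status command, e.g.:

gluster volume status gv0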

exec script: ./ps_mem.py -p 605 -w 61 > log
root at 128:/# ./ps_mem.py -p 605
Private + Shared = RAM used Program
23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
---------------------------------
24856.0 KiB
=================================


Client side:
mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0 /gluster_mount
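
As a quick sanity check (not part of the attached scripts), the mount can be
verified before starting the test, for example:

df -hT /gluster_mount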


We are using the script below to write and delete the file.

test_v4.15.sh
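
The exact contents of test_v4.15.sh are in the attachment. Purely as an
illustration of the kind of loop it runs, a minimal sketch might look like the
following (the file name, size and use of dd are assumptions, not the attached
script):

#!/bin/bash
# Hypothetical write/delete loop; the real test_v4.15.sh is attached to the bug.
while true; do
    # write a file on the gluster mount
    dd if=/dev/urandom of=/gluster_mount/testfile bs=1M count=100
    sync
    # delete it again
    rm -f /gluster_mount/testfile
    # repeat every 15 minutes, matching the reported interval
    sleep 900
done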

We also use the script below to monitor the memory increase while the above
script is running in the background.

ps_mem.py
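
ps_mem.py reports private + shared memory per process. As an independent
cross-check (not part of the original test), the resident set size of the
brick process can also be sampled with a plain shell loop, assuming PID 605 as
above:

while true; do
    date
    grep VmRSS /proc/605/status
    sleep 60
done >> rss.log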

I am attaching the script files as well as the results obtained after testing
this scenario.


Actual results: A memory leak is present.


Expected results: There should be no memory leak.


Additional info: Please see the attached files for more details. I am also
attaching the statedumps.
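
For anyone reproducing this, statedumps of the brick processes can be
triggered with the standard command (the dump files are written to
/var/run/gluster by default):

gluster volume statedump gv0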
