[Bugs] [Bug 1623107] FUSE client's memory leak
bugzilla at redhat.com
Sat Dec 29 03:13:37 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1623107
--- Comment #29 from Yan <y.zhao at nokia.com> ---
Please refer to attachment 1517308 for statedump output captured every half hour.
1). The test is done with 5.1. A similar issue has been observed in 3.12.13 and
4.1.4 as well.
# gluster --version
glusterfs 5.1
2). The statedump output above was collected while a "find" operation ran every
second on the mount dir:
#!/bin/sh
a=0
# Note: the increment below is commented out, so the loop never terminates
# and the "find" runs indefinitely, once per second.
while [ $a -lt 36000 ]
do
    find /gserver_mount1/ -type f > /dev/null
    sleep 1
    #a=`expr $a + 1`
done
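For anyone reproducing the periodic dumps: a statedump of the FUSE client can be requested by sending SIGUSR1 to the glusterfs client process (the dump is written under /var/run/gluster by default). A minimal sketch, assuming a single glusterfs client process on the host:

```shell
# Request a statedump from the FUSE client via SIGUSR1.
# Assumes one glusterfs client process; pick the right PID if there are several.
PID=$(pgrep -x glusterfs | head -n1)
if [ -n "$PID" ]; then
    kill -USR1 "$PID"
    echo "statedump requested for pid $PID"
else
    echo "no glusterfs client process found"
fi
```

Run this on a timer (e.g. every half hour) to produce the series of dumps attached above.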
3). Here's the volume info, with readdir-ahead on:
# gluster volume info
Volume Name: glustervol1
Type: Replicate
Volume ID: 47aecf8c-de2f-43a5-8cab-64832bd28bd1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.76.65.246:/mnt/data1/1
Brick2: 10.76.65.247:/mnt/data1/1
Brick3: 10.76.65.248:/mnt/data1/1
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.read-ahead: on
performance.readdir-ahead: on
4). Memory leakage is reduced with readdir-ahead off (see Bug 1659432).
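For anyone comparing leak rates, the option can be toggled with the standard volume-set command. A sketch, assuming the volume name from the info above and that the gluster CLI is run on a server node:

```shell
# Disable readdir-ahead on the test volume to compare client memory growth.
# Assumes the gluster CLI is available on this node.
if command -v gluster >/dev/null 2>&1; then
    gluster volume set glustervol1 performance.readdir-ahead off
else
    echo "gluster CLI not available"
fi
```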
5). The section below reports an invalid size: the values sit just below 2^64,
which suggests a signed allocation counter went negative and wrapped around
when stored as an unsigned 64-bit number.
[mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
size=18446744073709524184
num_allocs=1
max_size=18446744073709551608
max_num_allocs=22
total_allocs=559978326
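A quick sanity check on the wraparound hypothesis: printing small negative numbers as unsigned 64-bit values reproduces both figures exactly (the printf reinterpretation is only an illustration, not part of the statedump tooling):

```shell
# -27432 reinterpreted as unsigned 64-bit gives the reported size,
# and -8 gives the reported max_size.
printf '%u\n' -27432   # 18446744073709524184
printf '%u\n' -8       # 18446744073709551608
```

So the accounting appears to have freed 27432 bytes more than it ever recorded allocating for this type, consistent with an underflow bug rather than a genuine 16-exabyte allocation.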
--
You are receiving this mail because:
You are on the CC list for the bug.
More information about the Bugs
mailing list