[Bugs] [Bug 1730948] New: [Glusterfs4.1.9] memory leak in fuse mount process.
bugzilla at redhat.com
Thu Jul 18 02:18:36 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1730948
Bug ID: 1730948
Summary: [Glusterfs4.1.9] memory leak in fuse mount process.
Product: GlusterFS
Version: 4.1
Hardware: x86_64
OS: Linux
Status: NEW
Component: fuse
Severity: urgent
Assignee: bugs at gluster.org
Reporter: guol-fnst at cn.fujitsu.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Created attachment 1591664
--> https://bugzilla.redhat.com/attachment.cgi?id=1591664&action=edit
The attachment is a memory usage statistic of process 31204.
Description of problem:
Memory leak in the FUSE mount process.
Version-Release number of selected component (if applicable):
# glusterd --version
glusterfs 4.1.9
How reproducible:
Consistently, when a Distributed-Disperse (8+3) volume is created and a directory on it is exported through Samba.
Steps to Reproduce:
1. Create a Distributed-Disperse (8+3) volume and export a directory from its FUSE mount through Samba (a command sketch follows these steps).
2. Write files to the CIFS mount point from Windows Server 2012 R2.
3. Observe that the FUSE mount process consumes more and more memory.
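For reference, a minimal command sketch of step 1, assuming the brick layout shown in the volume info below and the mount point shown in the ps output (the full 33-brick list is abbreviated here):
# gluster volume create res disperse-data 8 redundancy 3 \
      192.168.100.11:/export/sdb_res/res \
      192.168.100.12:/export/sdb_res/res \
      192.168.100.13:/export/sdb_res/res \
      ...   (33 bricks in total, as listed under "Bricks:" below)
# gluster volume start res
# mount -t glusterfs -o acl 192.168.100.12:/res /mnt/res
# systemctl restart smb      # after adding the [testcifs] share shown below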
Actual results:
The glusterfs FUSE client process (PID 31204) keeps growing while files are written over CIFS; in the top output below it has reached about 64 GB resident memory (52% of the node's RAM).
Expected results:
Memory usage of the FUSE mount process stays bounded during sustained writes.
Additional info:
Volume Name: res
Type: Distributed-Disperse
Volume ID: 22d8a737-d902-4860-a9cf-5a2eb5641c1e
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (8 + 3) = 33
Transport-type: tcp
Bricks:
Brick1: 192.168.100.11:/export/sdb_res/res
Brick2: 192.168.100.12:/export/sdb_res/res
Brick3: 192.168.100.13:/export/sdb_res/res
Brick4: 192.168.100.11:/export/sdc_res/res
Brick5: 192.168.100.12:/export/sdc_res/res
Brick6: 192.168.100.13:/export/sdc_res/res
Brick7: 192.168.100.11:/export/sdd_res/res
Brick8: 192.168.100.12:/export/sdd_res/res
Brick9: 192.168.100.13:/export/sdd_res/res
Brick10: 192.168.100.11:/export/sde_res/res
Brick11: 192.168.100.12:/export/sde_res/res
Brick12: 192.168.100.13:/export/sde_res/res
Brick13: 192.168.100.11:/export/sdf_res/res
Brick14: 192.168.100.12:/export/sdf_res/res
Brick15: 192.168.100.13:/export/sdf_res/res
Brick16: 192.168.100.11:/export/sdg_res/res
Brick17: 192.168.100.12:/export/sdg_res/res
Brick18: 192.168.100.13:/export/sdg_res/res
Brick19: 192.168.100.11:/export/sdh_res/res
Brick20: 192.168.100.12:/export/sdh_res/res
Brick21: 192.168.100.13:/export/sdh_res/res
Brick22: 192.168.100.11:/export/sdi_res/res
Brick23: 192.168.100.12:/export/sdi_res/res
Brick24: 192.168.100.13:/export/sdi_res/res
Brick25: 192.168.100.11:/export/sdj_res/res
Brick26: 192.168.100.12:/export/sdj_res/res
Brick27: 192.168.100.13:/export/sdj_res/res
Brick28: 192.168.100.11:/export/sdk_res/res
Brick29: 192.168.100.12:/export/sdk_res/res
Brick30: 192.168.100.13:/export/sdk_res/res
Brick31: 192.168.100.11:/export/sdl_res/res
Brick32: 192.168.100.12:/export/sdl_res/res
Brick33: 192.168.100.13:/export/sdl_res/res
Options Reconfigured:
server.tcp-user-timeout: 3
client.tcp-user-timeout: 5
network.inode-lru-limit: 200000
performance.nl-cache-timeout: 600
performance.nl-cache: on
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.nfs.io-threads: on
performance.nfs.quick-read: on
performance.client-io-threads: on
network.tcp-window-size: 1048567
performance.cache-refresh-timeout: 4
performance.cache-max-file-size: 128MB
performance.rda-cache-limit: 20MB
performance.parallel-readdir: on
cluster.lookup-optimize: on
cluster.heal-timeout: 300
network.ping-timeout: 10
server.event-threads: 11
performance.io-thread-count: 40
performance.read-ahead-page-count: 16
performance.write-behind-window-size: 512MB
performance.cache-size: 4GB
performance.write-behind: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
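The non-default options above would typically have been applied with gluster volume set, for example:
# gluster volume set res performance.cache-size 4GB
# gluster volume set res network.inode-lru-limit 200000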
# cat /etc/samba/smb.conf
use sendfile = no
[testcifs]
path = /mnt/res/file_grp/test
writable = yes
read only = no
guest ok = yes
kernel share modes = no
posix locking = no
map archive = no
map hidden = no
map read only = no
map system = no
store dos attributes = yes
create mode = 0770
directory mode = 2770
map acl inherit = yes
oplocks = yes
level2 oplocks = yes
dos filemode = no
dos filetime resolution = no
fake directory create times = no
dos filetimes = no
csc policy = manual
browseable = yes
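For step 2, the share can be mapped and written to from the Windows Server 2012 R2 client along these lines (share name from the smb.conf above; the server address and the file name are only illustrative):
C:\> net use Z: \\192.168.100.12\testcifs
C:\> copy C:\data\testfile.bin Z:\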
[root at node-2 ~]# ps -ef | grep 31204
root 15320 39831 0 Jul17 pts/0 00:00:00 grep --color=auto 31204
root 21121 7093 0 10:12 pts/3 00:00:00 grep --color=auto 31204
root 31204 1 99 Jul17 ? 1-00:05:53 /usr/sbin/glusterfs --acl --process-name fuse --volfile-server=192.168.100.12 --volfile-id=res /mnt/res
[root at node-2 ~]# top
top - 10:12:30 up 1 day, 14:47, 4 users, load average: 7.65, 7.55, 8.14
Tasks: 714 total, 1 running, 350 sleeping, 3 stopped, 0 zombie
%Cpu(s): 2.4 us, 0.8 sy, 0.0 ni, 96.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13170401+total, 49424232 free, 76349952 used, 5929832 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 53584984 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31204 root 20 0 66.013g 0.064t 9904 S 100.0 52.0 1445:55 glusterfs
21215 root 20 0 168572 5200 3892 R 11.8 0.0 0:00.03 top
12579 it 20 0 443260 24080 20220 S 5.9 0.0 36:31.31 smbd
1 root 20 0 192712 7076 3824 S 0.0 0.0 1:21.12 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:01.07 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 I 0.0 0.0 0:18.93 kworker/u80:0
The attachment is a memory usage statistic of process 31204.
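A similar per-process memory trace can be collected on the client node with a simple sampling loop (a sketch only; the interval and log path are arbitrary):
# while true; do echo "$(date +%s) $(ps -o rss= -p 31204)" >> /tmp/glusterfs_31204_rss.log; sleep 60; done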
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.