[Bugs] [Bug 1806823] New: glusterfsd consumes a large amount of memory under IO from multiple clients; after the IO finishes, memory does not drop
bugzilla at redhat.com
Tue Feb 25 05:42:37 UTC 2020
https://bugzilla.redhat.com/show_bug.cgi?id=1806823
Bug ID: 1806823
Summary: glusterfsd consumes a large amount of memory under IO from
multiple clients; after the IO finishes, memory does not drop
Product: GlusterFS
Version: 7
Hardware: x86_64
OS: Linux
Status: NEW
Component: io-threads
Severity: high
Assignee: bugs at gluster.org
Reporter: zz.sh.cynthia at gmail.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Created attachment 1665557
--> https://bugzilla.redhat.com/attachment.cgi?id=1665557&action=edit
export brick glusterfsd statedump, taken after stopping fio and removing the fio files
Description of problem:
When 10 glusterfs clients do IO at the same time, glusterfsd memory consumption
grows; after the IO finishes, the glusterfsd memory usage does not drop.
Version-Release number of selected component (if applicable):
glusterfs 7
How reproducible:
Steps to Reproduce:
1. Start IO with fio on 10 glusterfs clients at the same time (see the fio sketch below)
2. Observe the memory usage of glusterfsd on the brick node
3. Finish the IO and remove the fio-created files
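A rough sketch of the kind of fio run used on each client (the exact job
parameters are not in this report; the mount path /mnt/export and the I/O
pattern below are assumptions for illustration only):
# fio --name=brickload --directory=/mnt/export --rw=randwrite --bs=4k --size=1g --numjobs=4 --direct=1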
Actual results:
glusterfsd memory usage stays high:
# ps -aux | grep glusterfsd| grep export
root 2643 84.8 20.2 2358828 412908 ? Ssl 05:32 5:48
/usr/sbin/glusterfsd -s dbm-0.local --volfile-id
export.dbm-0.local.mnt-bricks-export-brick -p
/var/run/gluster/vols/export/dbm-0.local-mnt-bricks-export-brick.pid -S
/var/run/gluster/bab7bc2d0256ba6a.socket --brick-name /mnt/bricks/export/brick
-l /var/log/glusterfs/bricks/mnt-bricks-export-brick.log --xlator-option
*-posix.glusterd-uuid=17da950d-c7a1-4a01-b139-20f8fb801346 --process-name brick
--brick-port 53954 --xlator-option export-server.listen-port=53954
--xlator-option transport.socket.bind-address=dbm-0.local
[root@dbm-0:/root]
# ps -T -p 2643
PID SPID TTY TIME CMD
2643 2643 ? 00:00:00 glusterfsd
2643 2644 ? 00:00:00 glfs_timer
2643 2645 ? 00:00:00 glfs_sigwait
2643 2646 ? 00:00:00 glfs_memsweep
2643 2647 ? 00:00:00 glfs_sproc0
2643 2648 ? 00:00:00 glfs_sproc1
2643 2649 ? 00:00:00 glusterfsd
2643 2650 ? 00:00:36 glfs_epoll000
2643 2651 ? 00:00:37 glfs_epoll001
2643 3046 ? 00:00:00 glfs_idxwrker
2643 3047 ? 00:00:14 glfs_iotwr000
2643 3050 ? 00:00:00 glfs_clogecon
2643 3051 ? 00:00:00 glfs_clogd000
2643 3054 ? 00:00:00 glfs_clogd001
2643 3055 ? 00:00:00 glfs_clogd002
2643 3060 ? 00:00:00 glfs_posix_rese
2643 3061 ? 00:00:00 glfs_posixhc
2643 3062 ? 00:00:00 glfs_posixctxja
2643 3063 ? 00:00:00 glfs_posixfsy
2643 3081 ? 00:00:20 glfs_rpcrqhnd
2643 3229 ? 00:00:20 glfs_rpcrqhnd
2643 3334 ? 00:00:14 glfs_iotwr001
2643 3335 ? 00:00:14 glfs_iotwr002
2643 3992 ? 00:00:14 glfs_iotwr003
2643 3995 ? 00:00:14 glfs_iotwr004
2643 4004 ? 00:00:14 glfs_iotwr005
2643 4005 ? 00:00:14 glfs_iotwr006
2643 4006 ? 00:00:14 glfs_iotwr007
2643 4016 ? 00:00:14 glfs_iotwr008
2643 4017 ? 00:00:14 glfs_iotwr009
2643 4019 ? 00:00:14 glfs_iotwr00a
2643 4020 ? 00:00:14 glfs_iotwr00b
2643 4021 ? 00:00:14 glfs_iotwr00c
2643 4022 ? 00:00:14 glfs_iotwr00d
2643 4023 ? 00:00:14 glfs_iotwr00e
2643 4024 ? 00:00:14 glfs_iotwr00f
Expected results:
After removing the fio-created files, glusterfsd memory usage should drop back
to roughly its pre-IO level.
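One simple way to check whether the memory is released is to sample the brick
process RSS after the files are removed (a sketch; PID 2643 is the one from the
ps output above and will differ per run):
# while true; do ps -o rss= -p 2643; sleep 60; done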
Additional info:
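A statedump like the attached one can be taken with the gluster CLI (the volume
name "export" is taken from the brick path above; by default the dump files are
written to the directory configured by server.statedump-path, typically
/var/run/gluster):
# gluster volume statedump export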