[Bugs] [Bug 1126831] Memory leak in GlusterFs client
bugzilla at redhat.com
Wed Oct 28 04:52:05 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1126831
Wade Fitzpatrick <wade.fitzpatrick at gmail.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |wade.fitzpatrick at gmail.com
--- Comment #4 from Wade Fitzpatrick <wade.fitzpatrick at gmail.com> ---
I think we have hit this bug too in gluster-3.7.5. There is nothing interesting
in the client logs.
core at comet ~ $ top
top - 13:07:04 up 1 day, 19:03, 1 user, load average: 0.30, 0.23, 0.22
Tasks: 160 total, 2 running, 158 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.7 us, 0.7 sy, 0.0 ni, 95.8 id, 0.1 wa, 0.1 hi, 0.6 si, 0.0 st
KiB Mem: 32978184 total, 32746628 used, 231556 free, 105032 buffers
KiB Swap: 16777212 total, 220340 used, 16556872 free. 1002344 cached Mem
  PID USER      PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
  742 root      20   0 29.179g 0.028t  3404 S  78.8 91.2 412:29.02 glusterfs
  706 root      20   0  810552 598904  4780 S  32.8  1.8  83:50.18 fleetd
  902 root      20   0  574400 402476  8552 R  26.3  1.2 193:03.93 ringcap
26527 root      20   0   18900   4104  1636 S  13.1  0.0   0:00.38 rsync
  755 root      20   0  869676  60408  6644 S   6.6  0.2   4:54.66 node
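If a statedump would help, I can grab one the next time the client balloons; as
far as I know, sending SIGUSR1 to the glusterfs fuse process makes it write a
dump under /var/run/gluster by default (742 being the client pid from the top
output above):

# kill -USR1 742
# ls -l /var/run/gluster/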
We have 93 identical client hosts, but only this one (which runs dropboxd on
ext4 and rsyncs data to the gluster volume) has exhibited the memory leak. I
have set quick-read and io-cache off on the volume, so we will see how it
progresses.
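For reference, I turned those two translators off with the usual volume set
commands, i.e. something like:

# gluster volume set static performance.quick-read off
# gluster volume set static performance.io-cache off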
# gluster volume info static
Volume Name: static
Type: Striped-Replicate
Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: james:/data/gluster1/static/brick1
Brick2: cupid:/data/gluster1/static/brick2
Brick3: hilton:/data/gluster1/static/brick3
Brick4: present:/data/gluster1/static/brick4
Options Reconfigured:
performance.quick-read: off
performance.io-cache: off
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
changelog.rollover-time: 10
changelog.fsync-interval: 3
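In the meantime I will just sample the client's resident set size every few
minutes to see whether disabling those options actually stops the growth. A
rough sketch (again using pid 742 from the top output above; the log path is
just my own choice):

# record a timestamped VmRSS sample for the glusterfs client every 5 minutes
while true; do
    echo "$(date -u +%FT%TZ) $(awk '/VmRSS/ {print $2, $3}' /proc/742/status)"
    sleep 300
done >> /root/glusterfs-rss.log &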
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.