[Bugs] [Bug 1593884] New: glusterfs-fuse 3.12.9/10 high memory consumption
bugzilla at redhat.com
Thu Jun 21 18:46:11 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1593884
Bug ID: 1593884
Summary: glusterfs-fuse 3.12.9/10 high memory consumption
Product: GlusterFS
Version: 3.12
Component: fuse
Assignee: bugs at gluster.org
Reporter: d.webb at hush.com
CC: bugs at gluster.org
Created attachment 1453582
--> https://bugzilla.redhat.com/attachment.cgi?id=1453582&action=edit
Gluster dump of client fuse process
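For reference, a client-side statedump like the attached one can normally be
generated by sending SIGUSR1 to the glusterfs fuse process; the dump file is
then written under the gluster run directory, usually /var/run/gluster. A
minimal sketch (the PID placeholder below is just illustrative):

# locate the fuse client for the gv_activemq mount and request a statedump
pgrep -af 'glusterfs.*gv_activemq'
kill -USR1 <pid-of-glusterfs-client>
ls -lt /var/run/gluster/   # newest glusterdump.<pid>.dump.* is the statedump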
Description of problem:
The glusterfs-fuse mount process is consuming large amounts of memory over a
relatively short period of time (several GB over a day) on a mount holding
less than 100 MB of data but with lots of churn.
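For illustration only: the churn here comes from ActiveMQ's KahaDB journal,
but a comparable small-file churn pattern could be approximated on the mount
with a hypothetical loop like the one below. This is not the actual workload
and not a confirmed reproducer.

# hypothetical churn generator against the fuse mount (not the real workload)
while true; do
    for i in $(seq 1 100); do
        dd if=/dev/zero of=/mnt/amq_broker/churn_$i bs=64k count=4 2>/dev/null
    done
    rm -f /mnt/amq_broker/churn_*
done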
Version-Release number of selected component (if applicable):
# Client side
glusterfs-3.12.10-1.el7.x86_64
glusterfs-client-xlators-3.12.10-1.el7.x86_64
glusterfs-libs-3.12.10-1.el7.x86_64
glusterfs-fuse-3.12.10-1.el7.x86_64
# Server Side:
glusterfs-cli-3.12.10-1.el7.x86_64
glusterfs-3.12.10-1.el7.x86_64
glusterfs-fuse-3.12.10-1.el7.x86_64
glusterfs-libs-3.12.10-1.el7.x86_64
glusterfs-api-3.12.10-1.el7.x86_64
glusterfs-client-xlators-3.12.10-1.el7.x86_64
glusterfs-server-3.12.10-1.el7.x86_64
# Volume setup:
# 3-node, 3-brick replica volume holding KahaDB files for ActiveMQ; the mount
itself only has a few MB in use:
node-001:/gv_activemq 250G 84M 250G 1% /mnt/amq_broker
# vol info:
Volume Name: gv_activemq
Type: Replicate
Volume ID: d3003a1d-b07e-4996-998d-6cbe26a587e2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node-001:/opt/gluster_storage/gv_activemq/brick
Brick2: node-002:/opt/gluster_storage/gv_activemq/brick
Brick3: node-003:/opt/gluster_storage/gv_activemq/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.readdir-ahead: off
network.ping-timeout: 5
performance.cache-size: 1GB
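For context, the "Options Reconfigured" above are what would have been applied
with "gluster volume set"; a sketch of the equivalent commands, using the
volume name from above (some of these, such as nfs.disable and
transport.address-family, are set automatically at volume creation):

gluster volume set gv_activemq performance.client-io-threads off
gluster volume set gv_activemq nfs.disable on
gluster volume set gv_activemq transport.address-family inet
gluster volume set gv_activemq performance.readdir-ahead off
gluster volume set gv_activemq network.ping-timeout 5
gluster volume set gv_activemq performance.cache-size 1GB

Note that performance.cache-size caps the client-side io-cache at 1GB, so it
should not on its own account for the multi-GB resident size shown under
"Additional info" below.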
How reproducible:
Memory utilisation seems to be a problem in at least 3.12.9 and 3.12.10 (I
upgraded from .9 to .10 in the hope that this would fix it). Another cluster
running 3.12.6, whose client serves the other end of this ActiveMQ setup, does
not show the same issue, so the problem looks to have been introduced somewhere
after that release.
Steps to Reproduce:
1. ?
2.
3.
Expected results:
Additional info:
# 3.12.6-1 usage on a much busier mount:
43489 root 20 0 681796 53828 4328 S 0.3 0.3 1624:11 glusterfs
node-001:/gv_amq_broker 200G 36G 165G 18% /opt/amq_broker
# 3.12.9/10 on a similar mount with less traffic:
48376 root 20 0 5164844 4.038g 4460 S 5.4 26.0 32:11.30 glusterfs
node-001:/gv_activemq 250G 84M 250G 1% /mnt/amq_broker
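To quantify the growth over time, one simple approach is to sample the
client's RSS periodically and compare the two versions; a minimal sketch
(process pattern and log path are assumptions, adjust as needed):

# log the RSS of the gv_activemq fuse client once a minute
pid=$(pgrep -f 'glusterfs.*gv_activemq' | head -n1)
while sleep 60; do
    echo "$(date '+%F %T') $(awk '/VmRSS/ {print $2, $3}' /proc/$pid/status)"
done >> /tmp/glusterfs_rss.log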