[Gluster-users] Run away memory with gluster mount
Dan Ragle
daniel at Biblestuph.com
Thu Jan 25 16:04:03 UTC 2018
Having a memory issue with Gluster 3.12.4 and not sure how to
troubleshoot. I don't *think* this is expected behavior. This is on an
up-to-date CentOS 7 box. The setup is a simple two-node replicated layout
where the two nodes act as both server and client. The volume in question:

    Volume Name: GlusterWWW
    Type: Replicate
    Volume ID: 8e9b0e79-f309-4d9b-a5bb-45d065faaaa3
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: vs1dlan.mydomain.com:/glusterfs_bricks/brick1/www
    Brick2: vs2dlan.mydomain.com:/glusterfs_bricks/brick1/www
    Options Reconfigured:
    nfs.disable: on
    cluster.favorite-child-policy: mtime
    transport.address-family: inet
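For completeness, the volume was created with nothing more exotic than
the usual commands, roughly:

    # rough reconstruction of the volume creation, brick paths as shown above
    gluster volume create GlusterWWW replica 2 \
        vs1dlan.mydomain.com:/glusterfs_bricks/brick1/www \
        vs2dlan.mydomain.com:/glusterfs_bricks/brick1/www
    gluster volume start GlusterWWW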
I had some other performance options in there (increased cache-size, md
invalidation, etc.) but stripped them out in an attempt to isolate the
issue. Still got the problem without them. The volume currently contains
over 1M files.
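For what it's worth, stripping them was just a matter of resetting each
option back to its default, along these lines (the option names below are
only examples of the kind of thing that was set):

    # examples only; each reset puts one reconfigured option back to its default
    gluster volume reset GlusterWWW performance.cache-size
    gluster volume reset GlusterWWW performance.cache-invalidation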
When mounting the volume, I get (among other things) a process such as:

    /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=/GlusterWWW /var/www

This process begins with little memory, but then as files are accessed
in the volume the memory increases.
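Watching the growth is as simple as polling the resident size of that
pid, e.g. something like (the pgrep pattern is just whatever matches the
mount process on your box):

    # log the fuse client's resident set size once a minute
    PID=$(pgrep -f 'volfile-id=/GlusterWWW')
    while true; do date; grep VmRSS /proc/$PID/status; sleep 60; done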
I set up a script that simply reads the files in the volume one at a
time (no writes). It's been running on and off for about 12 hours now,
and the resident memory of the above process is already at 7.5G and
continues to grow slowly. If I stop the test script the memory stops
growing, but it does not shrink. If I restart the test script, the
memory begins slowly growing again.
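The test script itself is nothing fancy: effectively just a walk of the
mount that reads each file once and throws the data away, along the
lines of:

    #!/bin/bash
    # read every regular file under the mount once, discarding the data
    find /var/www -type f -print0 | while IFS= read -r -d '' f; do
        cat "$f" > /dev/null
    done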
This is obviously a contrived app environment. With my intended
application load it takes about a week or so for the memory to get high
enough to invoke the OOM killer. Is there potentially something
misconfigured here?

Thanks,

Dan Ragle
daniel at Biblestuph.com