[Gluster-users] Memory leak with a replica 3 arbiter 1 configuration
Benjamin Edgar
benedgar8 at gmail.com
Mon Aug 22 20:52:48 UTC 2016
Hi,
I appear to have a memory leak with a replica 3 arbiter 1 configuration of
gluster. I have a data brick and an arbiter brick on one server, and
another server with the last data brick. The more files I write to gluster
in this configuration, the more memory the arbiter brick process consumes.
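For reference, the volume was created along these lines (the volume name,
hostnames, and brick paths below are simplified placeholders rather than the
exact ones on my cluster; the last brick listed becomes the arbiter):

# server1 holds a data brick and the arbiter brick, server2 holds the
# other data brick. "force" is needed because two of the bricks live
# on the same server.
gluster volume create testvol replica 3 arbiter 1 \
    server2:/bricks/data \
    server1:/bricks/data \
    server1:/bricks/arbiter \
    force
gluster volume start testvol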
I am able to reproduce this issue by first setting up a replica 3 arbiter 1
configuration and then running the following bash script, which creates
10,000 200 kB files, deletes them, and repeats forever:
while true ; do
    # create 10,000 200 kB files, then delete them all and start over
    for i in {1..10000} ; do
        dd if=/dev/urandom bs=200K count=1 of="$TEST_FILES_DIR/file$i"
    done
    rm -rf "$TEST_FILES_DIR"/*
done
$TEST_FILES_DIR is a location on my gluster mount.
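In case it matters, it is just a directory under a normal FUSE mount of the
volume, set up roughly like this (mount point and volume name are again
placeholders):

mount -t glusterfs server1:/testvol /mnt/gluster
export TEST_FILES_DIR=/mnt/gluster/test-files
mkdir -p "$TEST_FILES_DIR"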
After about 3 days of this script running on one of my clusters, this is
what the output of "top" looks like:
  PID USER  PR  NI    VIRT      RES   SHR S  %CPU %MEM     TIME+ COMMAND
16039 root  20   0 1397220    77720  3948 S  20.6  1.0 860:01.53 glusterfsd
13174 root  20   0 1395824   112728  3692 S  19.6  1.5 806:07.17 glusterfs
19961 root  20   0 2967204 *2.145g*  3896 S  17.3 29.0 752:10.70 glusterfsd
As you can see, one of the brick processes is using over 2 gigabytes of
memory.
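To track the growth over time I have been sampling the leaking brick's
memory periodically, roughly like this (19961 is the arbiter brick PID from
the "top" output above; adjust as needed):

# print RSS and VSZ (in kB) of the arbiter brick once an hour
while true ; do
    date
    ps -o rss=,vsz= -p 19961
    sleep 3600
done

"gluster volume status testvol mem" also gives a per-brick memory breakdown
if that is useful (volume name is a placeholder).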
One work-around for this is to kill the arbiter brick process and restart
the gluster daemon. This restarts the arbiter brick process, and its memory
usage drops back down to a reasonable level. However, I would rather not
have to kill the arbiter brick every week in production environments.
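Concretely, the work-around looks something like this (the PID is the one
from the "top" output above, and the service name may differ per distro):

# kill the leaking arbiter brick process...
kill 19961
# ...then restart the management daemon, which respawns the brick
# (alternatively, "gluster volume start testvol force" only restarts
# bricks that are down)
systemctl restart glusterd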
Has anyone seen this issue before and is there a known work-around/fix?
Thanks,
Ben