[Gluster-users] Gluster eating up a lot of ram

Diego Remolina dijuremo at gmail.com
Fri Mar 1 22:09:14 UTC 2019


I am using glusterfs with two servers as a file server, sharing files via
samba and ctdb. I cannot use the samba vfs_glusterfs plugin due to a bug in
the current CentOS version of samba, so I am mounting via fuse and exporting
the volume to samba from the mount point.
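For anyone reproducing the setup, it looks roughly like this (the mount
point, mount server, and share options below are illustrative placeholders,
not the actual production configuration; only the volume name "export" comes
from the status output further down):

```
# /etc/fstab -- FUSE mount of the gluster volume "export"
# (localhost as the mount server is an assumption)
localhost:/export  /mnt/export  glusterfs  defaults,_netdev  0 0

# /etc/samba/smb.conf -- export the FUSE mount point directly,
# instead of going through the (currently broken) vfs_glusterfs module
[export]
    path = /mnt/export
    read only = no
```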

Upon initial boot, the server where samba is exporting files climbs to
~10GB of RAM within a couple of hours of use. From then on, there is a
constant slow memory increase. In the past, with gluster 3.8.x, we had to
reboot the servers at around 30 days. With gluster 4.1.6 we are getting up
to 48 days, but RAM use is at 48GB out of 64GB. Is this normal?

The particular versions are below:

[root@ysmha01 home]# uptime
16:59:39 up 48 days,  9:56,  1 user,  load average: 3.75, 3.17, 3.00
[root@ysmha01 home]# rpm -qa | grep gluster
centos-release-gluster41-1.0-3.el7.centos.noarch
glusterfs-server-4.1.6-1.el7.x86_64
glusterfs-api-4.1.6-1.el7.x86_64
centos-release-gluster-legacy-4.0-2.el7.centos.noarch
glusterfs-4.1.6-1.el7.x86_64
glusterfs-client-xlators-4.1.6-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.8.x86_64
glusterfs-fuse-4.1.6-1.el7.x86_64
glusterfs-libs-4.1.6-1.el7.x86_64
glusterfs-rdma-4.1.6-1.el7.x86_64
glusterfs-cli-4.1.6-1.el7.x86_64
samba-vfs-glusterfs-4.8.3-4.el7.x86_64
[root@ysmha01 home]# rpm -qa | grep samba
samba-common-tools-4.8.3-4.el7.x86_64
samba-client-libs-4.8.3-4.el7.x86_64
samba-libs-4.8.3-4.el7.x86_64
samba-4.8.3-4.el7.x86_64
samba-common-libs-4.8.3-4.el7.x86_64
samba-common-4.8.3-4.el7.noarch
samba-vfs-glusterfs-4.8.3-4.el7.x86_64
[root@ysmha01 home]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

RAM view using top:
Tasks: 398 total,   1 running, 397 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.0 us,  9.3 sy,  1.7 ni, 71.6 id,  9.7 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 65772000 total,  1851344 free, 60487404 used,  3433252 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  3134316 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 9953 root      20   0 3727912 946496   3196 S 150.2  1.4  38626:27 glusterfsd
 9634 root      20   0   48.1g  47.2g   3184 S  96.3 75.3  29513:55 glusterfs
14485 root      20   0 3404140  63780   2052 S  80.7  0.1   1590:13 glusterfs
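For what it's worth, the %MEM column is consistent with the resident size of
the fuse client process (PID 9634): 47.2g RES out of 65772000 KiB total is
about 75%, so nearly all of that memory really is held by the glusterfs
mount process rather than page cache.

```python
# Sanity-check that the 75.3 %MEM figure matches RES for PID 9634
total_kib = 65_772_000            # "KiB Mem : 65772000 total" from top
res_kib = 47.2 * 1024 ** 2        # RES shown as 47.2g (rounded by top)
pct_mem = res_kib / total_kib * 100
print(f"{pct_mem:.1f}")           # ~75.2, in line with the 75.3 reported
```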

[root@ysmha01 ~]# gluster v status export
Status of volume: export
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.7:/bricks/hdds/brick           49157     0          Y       13986
Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       9953
Self-heal Daemon on localhost               N/A       N/A        Y       14485
Self-heal Daemon on 10.0.1.7                N/A       N/A        Y       21934
Self-heal Daemon on 10.0.1.5                N/A       N/A        Y       4598

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks



