[Gluster-users] Memory usage high on server sides
Chris Jin
chris at pikicentral.com
Thu Apr 15 02:22:35 UTC 2010
Hi Krzysztof,
Thanks for your replies. And you are right, the server process should be
glusterfsd. But I did mean the servers. After two days of copying, the two
glusterfsd processes have taken almost 70% of the total memory between them.
I am worried that one more such process will bring our servers down.
$ ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 26472 2.2 29.1 718100 600260 ? Ssl Apr09 184:09
glusterfsd -f /etc/glusterfs/servers/r2/f1.vol
root 26485 1.8 39.8 887744 821384 ? Ssl Apr09 157:16
glusterfsd -f /etc/glusterfs/servers/r2/f2.vol
Meanwhile, the client side seems OK:
$ ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 19692 1.3 0.0 262148 6980 ? Ssl Apr12
61:33 /sbin/glusterfs --log-level=NORMAL
--volfile=/u2/git/modules/shared/glusterfs/clients/r2/c2.vol /gfs/r2/f2
Any ideas?
On Wed, 2010-04-14 at 10:16 +0200, Krzysztof Strasburger wrote:
> On Wed, Apr 14, 2010 at 06:33:15AM +0200, Krzysztof Strasburger wrote:
> > On Wed, Apr 14, 2010 at 09:22:09AM +1000, Chris Jin wrote:
> > > Hi, I ran one more test today. The copying has now run for 24 hours
> > > and the memory usage is about 800MB, 39.4% of the total. But there is no
> > > external IP connection error. Is this a memory leak?
> > Seems to be, and a very persistent one. Present in glusterfs at least
> > since version 1.3 (the oldest I used).
> > Krzysztof
> I corrected the subject, as the memory usage is high on the client side
> (glusterfs is the client process, glusterfsd is the server, and it has never
> used that much memory at my site).
> I did some more tests with logging. According to my old valgrind report,
> huge amounts of memory were still in use at exit, and these were allocated
> in __inode_create and __dentry_create. So I added log points in these
> functions and performed the "du test", i.e. mounted a glusterfs directory
> containing a large number of files with the log level set to TRACE, ran du
> on it, then ran echo 3 > /proc/sys/vm/drop_caches, waited a while until the
> log file stopped growing, and finally unmounted and checked the (huge) logfile:
> prkom13:~# grep inode_create /var/log/glusterfs/root-loop-test.log |wc -l
> 151317
> prkom13:~# grep inode_destroy /var/log/glusterfs/root-loop-test.log |wc -l
> 151316
> prkom13:~# grep dentry_create /var/log/glusterfs/root-loop-test.log |wc -l
> 158688
> prkom13:~# grep dentry_unset /var/log/glusterfs/root-loop-test.log |wc -l
> 158688
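>
> (For reference, the log points were one-liners of roughly this shape;
> I am sketching from memory here, not pasting the exact glusterfs source:)
>
> /* in __inode_create (), just before returning the new inode: */
> gf_log ("inode", GF_LOG_TRACE, "inode_create: %p", inode);
> /* in inode_destroy (), just before the free calls: */
> gf_log ("inode", GF_LOG_TRACE, "inode_destroy: %p", inode);
> /* ...and the same pair for __dentry_create () / dentry_unset () */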
>
> Do you see? Everything seems to be OK: 151317 inodes created, one fewer
> destroyed (probably the root inode), and the same number of dentries created
> and destroyed. The memory should be freed (there are calls to free() in the
> inode_destroy and dentry_unset functions), but it is not. Any ideas what is
> going on? Glusterfs developers: is something kept in the lists where inodes
> and dentries live, interleaved with those inodes and dentries, so that no
> memory page can be unmapped?
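>
> To make the suspicion concrete, here is a tiny standalone program (not
> glusterfs code; the object sizes are invented) showing how glibc malloc
> behaves when freed objects share pages with surviving ones:
>
> /* interleave.c - build with: gcc -o interleave interleave.c
>  * Small objects of two kinds are allocated alternately, then one kind
>  * is freed; every freed chunk shares its page with a survivor, so the
>  * allocator cannot return any page to the kernel. */
> #include <stdlib.h>
> #include <malloc.h>
>
> #define N 1000000
>
> int main (void)
> {
>         static void *a[N], *b[N];
>         int i;
>
>         for (i = 0; i < N; i++) {
>                 a[i] = malloc (160);  /* stand-in "inode" */
>                 b[i] = malloc (48);   /* stand-in "dentry" */
>         }
>         malloc_stats ();  /* glibc-only: prints system/in-use bytes */
>
>         for (i = 0; i < N; i++)
>                 free (b[i]);          /* destroy every "dentry" */
>
>         malloc_stats ();  /* in-use bytes drop, but system bytes (and
>                            * RSS) do not: every heap page still holds
>                            * a live "inode" */
>         return 0;
> }
>
> If glusterfs keeps even a few live objects (list heads, hash buckets?)
> scattered through the same pages as the freed inodes and dentries, free()
> would be called exactly as the log shows, yet the RSS would still not drop.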
> We should also look at the kernel: why does it not send FORGETs immediately,
> even with drop_caches=3?
> Krzysztof
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>