[Gluster-users] Old story - glusterfs memory usage

Krzysztof Strasburger strasbur at chkw386.ch.pwr.wroc.pl
Mon Apr 12 14:52:26 UTC 2010


Hello again,
I repeated my "du test", which causes excessive memory allocation in the
glusterfs client, with the log level set to TRACE and a few additional
logging points added in inode.c. After each forget(), for example:
[fuse-bridge.c:477:fuse_forget] glusterfs-fuse: got forget on inode (27796080)
a call to __inode_destroy follows (this is my added log point):
[inode.c:243:__inode_destroy] glusterfs-inode: inode_destroy (27796080)
but the memory usage does not decrease. WTH? Is other data from those
lists still scattered over the same memory pages, preventing glibc from
unmapping them?
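
The effect itself is easy to reproduce outside glusterfs. Below is a
minimal standalone sketch (plain glibc, nothing glusterfs-specific, the
sizes and counts are arbitrary): allocate a lot of small chunks and free
every second one - the RSS barely drops, because glibc can only give
whole pages back to the kernel and every page still holds a live chunk.

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

#define N 1000000

int main (void)
{
        char **p = malloc (N * sizeof (*p));
        long   i;

        /* many small allocations, like an inode table full of small structs */
        for (i = 0; i < N; i++)
                p[i] = malloc (256);

        /* free every second chunk: half of the memory is "free", but almost
           no page becomes completely empty, so glibc keeps it all mapped */
        for (i = 0; i < N; i += 2)
                free (p[i]);
        printf ("half freed - check RSS now, it will barely drop\n");
        getchar ();

        /* free the rest and trim: now the pages can go back to the kernel */
        for (i = 1; i < N; i += 2)
                free (p[i]);
        malloc_trim (0);
        printf ("all freed and trimmed - RSS should drop now\n");
        getchar ();

        free (p);
        return 0;
}

Run it and watch the RSS in top at the two pauses. If the destroyed
inodes are interleaved with longer-lived allocations in the client, the
same thing would happen there.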

BTW, forgets are not sent during the test at all, regardless of the current
value of /proc/sys/vm/drop_caches. They only start to arrive after I set it
to 3 manually, even if it already was 3 before! So it seems that the kernel
(2.6.32) behaves in an insane way here, and glusterfs has its own dark
mystery on top of that.
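
For completeness, the manual step is nothing more than writing 3 to
/proc/sys/vm/drop_caches as root (echo 3 > /proc/sys/vm/drop_caches),
which asks the kernel to drop the page cache plus dentries and inodes,
and that is what finally makes FUSE send the forgets. A tiny standalone
C equivalent, purely for illustration:

#include <stdio.h>

int main (void)
{
        /* same as "echo 3 > /proc/sys/vm/drop_caches", needs root */
        FILE *f = fopen ("/proc/sys/vm/drop_caches", "w");

        if (!f) {
                perror ("drop_caches");
                return 1;
        }
        fputs ("3\n", f);
        fclose (f);
        return 0;
}
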
It is also clear why the CPU usage often goes up to 30-40% on a C2D CPU:
the inodes are managed in lists, and walking those lists becomes rather
slow once hundreds of thousands of (in fact unused) entries are kept
there.
Regards
Krzysztof


