[Gluster-users] Old story - glusterfs memory usage

Amar Tumballi amar at gluster.com
Fri Mar 26 05:39:39 UTC 2010


Hi,

> I then copied the /usr directory to /root/loop-test (ca. 160000 files).
> And then ran "du /root/loop-test".
> Memory usage of respective glusterfs process went up from 16 MB to 50 MB.
>

Ok,


> I could reproduce it perfectly by unmounting /root/loop-test, mounting it
> again and re-running "du".
> More files touched means more memory used by glusterfs.
> This is not a memory leak. Repeating this "du" does not cause memory
> usage to go up by even a single byte.


This is expected.


> * The glusterfs client keeps, somewhere, information about _every file
> touched_,
>
> * and keeps it _forever_.


Both of the comments above are wrong.

GlusterFS keeps inode table entries (i.e., dentries) in a 1:1 mapping with
what the kernel VFS holds in its memory.

An entry gets freed when the kernel sends 'forget()' for its inode (the
kernel eventually sends a forget for each and every inode it holds in
memory). The kernel does this automatically as memory pressure increases.
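As an illustration only (this is not GlusterFS code, and the class and
method names are made up for the sketch), the lifecycle can be modeled as a
table that grows on lookup and shrinks when the kernel sends forget():

```python
class InodeTable:
    """Toy model of the client's inode table: one entry per inode that
    the kernel VFS currently holds, freed when the kernel sends forget()."""

    def __init__(self):
        self.entries = {}  # inode number -> dentry info

    def lookup(self, ino, name):
        # Kernel resolved a path: remember the inode, 1:1 with the VFS.
        self.entries[ino] = name

    def forget(self, ino):
        # Kernel dropped the inode from its cache; free our entry too.
        self.entries.pop(ino, None)


table = InodeTable()
for ino in range(5):
    table.lookup(ino, f"file{ino}")
print(len(table.entries))  # 5: entries held while the kernel caches them
for ino in range(5):
    table.forget(ino)
print(len(table.entries))  # 0: memory released once forgets arrive
```

This is why repeating "du" does not grow memory further: the entries for
those inodes already exist, and they are released only when the kernel
forgets them.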

To force the kernel to send forgets to glusterfs, do the following:

bash# echo 3 > /proc/sys/vm/drop_caches

and check the memory usage after this.
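To see the effect, you can compare the client's resident memory before and
after dropping caches. A minimal sketch of reading a process's VmRSS from
/proc (the rss_kb helper is illustrative, not GlusterFS tooling; for the
real check, pass the glusterfs client's pid, e.g. from `pidof glusterfs`):

```python
import os
import re


def rss_kb(pid):
    """Return the resident set size of a process in kB, read from /proc."""
    with open(f"/proc/{pid}/status") as f:
        match = re.search(r"^VmRSS:\s+(\d+)\s+kB", f.read(), re.M)
    return int(match.group(1))


# Demonstrated on our own pid here; for glusterfs, record this value,
# run `echo 3 > /proc/sys/vm/drop_caches` as root, then read it again.
print(rss_kb(os.getpid()))
```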

> As the situation did not improve since the times of glusterfs 1.3, I
> assume this behavior to be a part of its design.


As explained above, it's not part of the design, but a filesystem
implementation requirement.

-Amar

