[Gluster-users] Old story - glusterfs memory usage

Krzysztof Strasburger strasbur at chkw386.ch.pwr.wroc.pl
Fri Mar 26 06:15:03 UTC 2010


On Fri, Mar 26, 2010 at 11:09:39AM +0530, Amar Tumballi wrote:
> > And then ran "du /root/loop-test".
> > Memory usage of respective glusterfs process went up from 16 MB to 50 MB.
> Ok,
Not ok... It goes up to hundreds of MBs with a larger number of files.
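
(For the record, this is roughly how I watch the client's memory around the
test; the pgrep pattern is just whatever matches the client process on my box,
so pick the pid by hand if it matches more than one process:)

PID=$(pgrep -f 'glusterfs.*loop-test')
grep VmRSS /proc/$PID/status     # before du: about 16 MB here
du /root/loop-test > /dev/null
grep VmRSS /proc/$PID/status     # after du: about 50 MB, and it stays there
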
> > This is not a memory leak. Repeating this "du" does not cause memory
> > usage to go up by even a single byte.
> This is expected.
I agree here.
> > * The glusterfs client keeps somewhere
> > information about _every file touched_,
> >
> * and keeps it _forever_.
> Both comments made above are wrong.
Amar, I would be happy to be wrong here and to learn that a simple solution
exists. IMO glusterfs is a great project and it has already saved our data
twice, when our (less than 1 year old) disks decided to die unexpectedly.
> 
> GlusterFS keeps its inode table entries (ie, dentries) in a 1:1 mapping
> with what the kernel VFS has in its memory.
> 
> They get freed when the kernel sends 'forget()' on an inode (it sends forget
> for each and every inode it has in memory). The kernel does this automatically
> as memory usage increases.
Could you please tell me which function in glusterfs handles the 'forget()'
request?
> 
> To send forceful forgets to glusterfs, do the following.
> 
> bash# echo 3 > /proc/sys/vm/drop_caches
> 
> and see the memory usage after this.
I did this almost one year ago, as recommended by a member of the Gluster team,
and repeated it yesterday at Raghavendra's request. Nothing happens;
the memory usage stays as high as it was before.
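
(Concretely, what I ran yesterday looked more or less like this, with PID found
as in the snippet above; the short sleep is only my own idea, to give the kernel
a moment to send its forgets:)

echo 3 > /proc/sys/vm/drop_caches
sleep 5
grep VmRSS /proc/$PID/status     # unchanged - still the same hundreds of MBs
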
The question I keep asking, and to which I have never received an answer, is this:
what happens at your site when you repeat the "du" test (ls -R works as well)?
If it works as you claim, then the real cause of my problems must be
hidden somewhere in my system setup, not in glusterfs.
The test setup I posted yesterday is trivial - you do not even need a server,
a directory is sufficient.
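
(For completeness, the whole test fits in a few lines. The paths and the volfile
below are only an example of the shape of it, not a copy of what I posted
yesterday, and the file count is just large enough to make the growth visible:)

mkdir -p /root/loop-test-export /mnt/loop-test
cat > /root/loop-test.vol <<EOF
volume posix
  type storage/posix
  option directory /root/loop-test-export
end-volume
EOF
# create a hundred thousand small files on the backend (slow but simple)
for i in $(seq 1 100000); do touch /root/loop-test-export/f$i; done
glusterfs -f /root/loop-test.vol /mnt/loop-test
du /mnt/loop-test > /dev/null    # watch the client's VmRSS grow while this runs
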
Regards
Krzysztof


