[Gluster-devel] io-cache exceeding cache-size value

Dan Parsons dparsons at nyip.net
Tue Feb 24 22:58:01 UTC 2009


Okay, here's the info you requested. I'm not entirely sure what to make of
it, but I hope you can tell me what it means. Note that after the echo 3 >
/proc/sys/vm/drop_caches step, the memory usage of glusterfs stayed the same
(I think that was expected, since drop_caches only drops kernel caches, not
process memory, but I'm just mentioning it).

right after gluster start:

Arena 0:
system bytes     =     135168
in use bytes     =     102512
Arena 1:
system bytes     =     135168
in use bytes     =       3696
Total (incl. mmap):
system bytes     =    2174976
in use bytes     =    2010848
max mmap regions =          4
max mmap bytes   =    1904640

once gluster hit cache-size (3GB), as reported by top:
Arena 0:
system bytes     =  895643648
in use bytes     =  851334192
Arena 1:
system bytes     =   65482752
in use bytes     =   64808592
Arena 2:
system bytes     =  482492416
in use bytes     =  482344656
Total (incl. mmap):
system bytes     = 3258667008
in use bytes     = 3213535632
max mmap regions =      23814
max mmap bytes   = 3220254720

with gluster at 4.1GB:
Arena 0:
system bytes     = 2262704128
in use bytes     = 2087039296
Arena 1:
system bytes     =  938864640
in use bytes     =  329595824
Arena 2:
system bytes     = 1312215040
in use bytes     =  750181168
Total (incl. mmap):
system bytes     =  220721152
in use bytes     = 3168720928
max mmap regions =      23814
max mmap bytes   = 3220254720

with gluster at 4.1GB, after echo 3 > /proc/sys/vm/drop_caches:
Arena 0:
system bytes     = 2510168064
in use bytes     =  858861680
Arena 1:
system bytes     =  938864640
in use bytes     =  127221808
Arena 2:
system bytes     = 1280753664
in use bytes     =   72664640
Total (incl. mmap):
system bytes     =  436723712
in use bytes     = 1060652768
max mmap regions =      23814
max mmap bytes   = 3220254720
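
In case it helps interpret the numbers: as I understand it, 'system bytes' is
what glibc has obtained from the kernel (via brk/mmap) and 'in use bytes' is
what malloc has actually handed out to the application, so the gap between
the two is memory that was free()d but not yet returned to the OS. Also, the
'Total' lines in the two 4.1GB dumps come out smaller than the sum of the
arenas, presumably because malloc_stats keeps 32-bit counters that wrap above
4GB. A toy program (nothing to do with glusterfs) prints the same kind of
dump:

    #include <malloc.h>   /* malloc_stats() is a glibc extension */
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(1024 * 1024); /* allocate 1MB so the stats are non-trivial */
        malloc_stats();                /* prints per-arena system/in-use bytes on stderr */
        free(p);
        return 0;
    }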

Dan


On Mon, Feb 23, 2009 at 3:23 PM, Amar Tumballi (bulde) <amar at gluster.com> wrote:

> Hi Gordan and Dan,
>
> It would help me a lot if it's possible for you to get the info as
> described below.
>
> compile glusterfs like this:
>
> bash# make clean > /dev/null
> bash# make CFLAGS="-g -O0 -DDEBUG" > /dev/null
> bash# make install
>
> run the process that consumes the memory (usually the client process) as below:
>
> bash# glusterfs <your usual arguments> -N
> <the process will now run in the foreground>
>
> Open another terminal
>
> bash# ps aux | grep glusterfs
> bash# kill -s SIGUSR1 <pid of glusterfs -N process>
> <Check in other terminal for memory usage stats>
>
> bash# <run your application over glusterfs as usual, until glusterfs shows
> high memory usage>
> bash# kill -s SIGUSR1 <pid of glusterfs -N process>
> <Check the stat in another terminal>
>
> bash# echo 3 > /proc/sys/vm/drop_caches
> bash# kill -s SIGUSR1 <pid of glusterfs -N process>
> <Check the stat in another terminal>
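>
> The mechanism here is essentially a SIGUSR1 handler that calls glibc's
> malloc_stats(); a minimal standalone sketch of the same idea (not the
> actual glusterfs source) looks like this:
>
>     #include <malloc.h>   /* malloc_stats(), a glibc extension */
>     #include <signal.h>
>     #include <unistd.h>
>
>     static void dump_stats(int sig)
>     {
>         (void)sig;
>         malloc_stats();   /* fine for debugging, though not strictly
>                              async-signal-safe */
>     }
>
>     int main(void)
>     {
>         signal(SIGUSR1, dump_stats); /* 'kill -s SIGUSR1 <pid>' triggers a dump */
>         for (;;)
>             pause();                 /* sleep until a signal arrives */
>     }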
>
> Even after dropping caches, if 'in use bytes =' in the malloc stats still
> shows a high value, then it is a leak. If it shows a low value but 'system
> bytes =' is still high, then glusterfs is not really consuming that memory
> itself; the problem is in the allocator holding on to freed memory in its
> allocation segments.
>
> Regards,
> Amar
>
> NOTE: the 'malloc_stats' output is printed only if we enable -DDEBUG while
> compiling glusterfs, as collecting it all the time hits performance badly.
>
>
> 2009/2/23 Gordan Bobic <gordan at bobich.net>
>
>> Dan Parsons wrote:
>>
>>> I'm having an issue with glusterfs exceeding its cache-size value. Right
>>> now I have it set to 4000MB and I've seen it climb as high as 4800MB. If I
>>> set it to 5000, I've seen it go as high as 6000MB. This is a problem because
>>> it forces me to set the value very low so that my apps don't get pushed into
>>> swap. Is there any way to fix this, to get it to stick to the limit I set
>>> and not exceed it?
>>>
>>
>> It's possible you are running into the same memory leak that I'm seeing,
>> and I'm not using io-cache or any other performance translators at all. With
>> rootfs on Gluster, doing a kernel compile (with the kernel source tree on
>> NFS, so that hopefully isn't contributing to the bloat) makes glusterfsd
>> bloat by about 80MB per pass, and that memory is never freed.
>>
>> Gordan
>>
>
>
>
> --
> Amar Tumballi
> Gluster/GlusterFS Hacker
> [bulde on #gluster/irc.gnu.org]
> http://www.zresearch.com - Commoditizing Super Storage!
>