[Gluster-devel] Monotonically increasing memory

Anders Blomdell anders.blomdell at control.lth.se
Fri Aug 1 07:07:37 UTC 2014


On 2014-08-01 08:56, Pranith Kumar Karampuri wrote:
> 
> On 08/01/2014 12:09 PM, Anders Blomdell wrote:
>> On 2014-08-01 02:02, Harshavardhana wrote:
>>> On Thu, Jul 31, 2014 at 11:31 AM, Anders Blomdell
>>> <anders.blomdell at control.lth.se> wrote:
>>>> During an rsync of 350000 files, the memory consumption of glusterfs
>>>> rose to 12 GB (after approx 14 hours). I take it that this is a bug
>>>> I should try to track down?
>>>>
>>> Does it ever come down? What happens if you rsync the same files
>>> repeatedly? Does it OOM?
>> Well, it OOM'd my Firefox first (that's how well I monitor my experiments :-()
>> No, memory usage does not come down by itself, AFAICT.
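>> (A crude way to keep an eye on it, assuming a single glusterfs client
>> process on the machine; with more than one, pidof returns several PIDs
>> and you have to pick the right one from ps:
>>
>>    # while sleep 60; do grep VmRSS /proc/$(pidof glusterfs)/status; done
>> )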
>>
>>
>> On 2014-08-01 02:12, Raghavendra Gowdappa wrote:
>>> Anders,
>>> Mostly it's a case of a memory leak. It would be helpful if you could
>>> file a bug on this. The following information would be useful for
>>> fixing the issue:
>>>
>>> 1. valgrind reports (if possible).
>>>   a. To start the brick and NFS processes under valgrind, you can use
>>>      the following command line when starting glusterd:
>>>      # glusterd --xlator-option *.run-with-valgrind=yes
>>>
>>>      In this case all the valgrind logs can be found in the standard
>>>      glusterfs log directory.
>>>
>>>   b. For the client, you can start glusterfs under valgrind just like
>>>      any other process. Since glusterfs daemonizes itself, we need to
>>>      prevent that when running under valgrind by keeping it in the
>>>      foreground; the -N option does this:
>>>      # valgrind --leak-check=full --log-file=<path-to-valgrind-log> glusterfs --volfile-id=xyz --volfile-server=abc -N /mnt/glfs
>>>
>>> 2. Once you observe a considerable memory leak, please take a statedump
>>>    of glusterfs:
>>>
>>>    # gluster volume statedump <volname>
>>>
>>> and attach the reports in the bug.
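>> (Note: on my source-built installation the statedump files land in
>> /usr/local/var/run/gluster (packaged builds typically use
>> /var/run/gluster), named glusterdump.<pid>.dump.<timestamp>, so
>> something like this picks out the newest one to attach:
>>
>>      # gluster volume statedump <volname>
>>      # ls -t /usr/local/var/run/gluster/glusterdump.*.dump.* | head -n 1
>> )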
>> Since it looks like Pranith has a clue, I'll leave it for a few weeks (other
>> pressing duties).
>>
>> On 2014-08-01 03:24, Pranith Kumar Karampuri wrote:
>>> Yes, I saw the following leaks too when I tested it a week back.
>>> You should probably take a statedump and see which datatypes are
>>> leaking. These were the leaks:
>>>
>>> root@localhost - /usr/local/var/run/gluster
>>> 14:10:26 $ awk -f /home/pk1/mem-leaks.awk glusterdump.22412.dump.1406174043
>>> [mount/fuse.fuse - usage-type gf_common_mt_char memusage]
>>> size=341240
>>> num_allocs=23602
>>> max_size=347987
>>> max_num_allocs=23604
>>> total_allocs=653194
>>> ...
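>> (The mem-leaks.awk script itself isn't included above; purely as a
>> sketch of that kind of filter, assuming the dump format shown (a
>> [section] header followed by key=value lines), something like this
>> would list the sections with a suspiciously large number of live
>> allocations; the 10000 threshold is an arbitrary pick of mine:
>>
>>      # awk -F= '
>>          /^\[/ { section = $0 }                  # remember current section header
>>          $1 == "num_allocs" && $2 + 0 > 10000 {  # flag large live-allocation counts
>>              print section; print
>>          }' glusterdump.22412.dump.1406174043
>> )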
>> I'll revisit this in a few weeks.
>>
>> Harshavardhana, Raghavendra, Pranith (and all others),
>>
>> Gluster is one of the most responsive Open Source projects I have
>> participated in so far; I'm very happy with all the support, help and
>> encouragement I have received. Even though my initial tests weren't
>> fully satisfactory, you are the main reason for my perseverance :-)
> Yay! Good :-). Do you have any suggestions on where we need to improve
> as a community to make it easier for new contributors?
http://review.gluster.org/#/c/8181/ (I will hopefully come around and
review that real soon now...)

Otherwise, no. I will recommend Gluster as an eminent crash course in
git, Gerrit and continuous integration. Keep up the good work.

/Anders 

-- 
Anders Blomdell                  Email: anders.blomdell at control.lth.se
Department of Automatic Control
Lund University                  Phone:    +46 46 222 4625
P.O. Box 118                     Fax:      +46 46 138118
SE-221 00 Lund, Sweden


