[Gluster-devel] Memory leak
Brent A Nelson
brent at phys.ufl.edu
Wed Mar 7 22:55:26 UTC 2007
With no performance translators at all, I still get what appears to be
memory leakage, but this time when doing metadata-heavy tasks. With copies
of /usr going to the GlusterFS mount, glusterfs slowly increases its memory
consumption, and glusterfsd consumes memory at a much more rapid pace
(though both grow much more slowly than the glusterfs leak reported earlier):
  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3047 root   16  0  166m 165m  712 S   42  4.2 11:51.19 glusterfsd
 3068 root   15  0 25344  23m  748 S   36  0.6 11:23.24 glusterfs
Heavy data operations don't cause noticeable increases in memory
consumption of either process in this setup.
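
If anyone wants to track the growth over time rather than eyeballing top,
something like this works while the copies run (the mount point and log
path below are arbitrary placeholders, not what I actually used):

# log RSS/VSZ of both daemons once a minute (paths are placeholders)
while sleep 60; do
    date
    ps -C glusterfs,glusterfsd -o pid,rss,vsz,comm
done >> /tmp/gluster-mem.log &

# the metadata-heavy workload: a couple of simultaneous copies of /usr
cp -a /usr /mnt/glusterfs/usr-copy1 &
cp -a /usr /mnt/glusterfs/usr-copy2 &
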
Thanks,
Brent
On Tue, 6 Mar 2007, Brent A Nelson wrote:
> I've narrowed the observed memory leak down to the read-ahead translator. I
> can apply stat-prefetch and write-behind without triggering the leak in my
> simple test, but read-ahead causes the glusterfs process's memory
> consumption to increase slowly for a little while and then suddenly climb
> very rapidly.
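>
> For reference, the only difference between the leaking and non-leaking
> runs is whether a read-ahead volume like the following sits at the top of
> the client spec (the subvolume name and the option names/values here are
> placeholders, not my exact config):
>
> cat >> client.vol <<'EOF'
> volume readahead
>   type performance/read-ahead
>   # option names/values are guesses; adjust to match the docs
>   option page-size 128KB
>   option page-count 16
>   subvolumes unify0
> end-volume
> EOF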
>
> Thanks,
>
> Brent
>
> On Tue, 6 Mar 2007, Brent A Nelson wrote:
>
>> I can reproduce the memory leak in the glusterfs process even with just two
>> disks from two nodes unified (it doesn't just occur with mirroring or
>> striping), at least when all performance translators are used except for
>> io-threads (io-threads causes my dd writes to die right away).
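>>
>> For concreteness, the client spec on each node has roughly this shape,
>> with write-behind, read-ahead, and stat-prefetch volumes stacked on top
>> of the unify volume (hostnames, volume names, and the scheduler choice
>> below are placeholders, not my exact config):
>>
>> cat > client.vol <<'EOF'
>> # one protocol/client volume per remote brick (placeholder names)
>> volume client1
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host node1
>>   option remote-subvolume brick
>> end-volume
>>
>> volume client2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host node2
>>   option remote-subvolume brick
>> end-volume
>>
>> # unify the two bricks into a single view
>> volume unify0
>>   type cluster/unify
>>   # scheduler choice is a guess
>>   option scheduler rr
>>   subvolumes client1 client2
>> end-volume
>> EOF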
>>
>> I have 2 nodes, with glusterfs unifying one disk from each node. Each node
>> is also a client. I do a dd on each node, simultaneously, with no problem:
>> node1: dd if=/dev/zero of=/phys/blah0 bs=10M count=1024
>> node2: dd if=/dev/zero of=/phys/blah1 bs=10M count=1024
>>
>> When doing a read on each node simultaneously, however, things go along for
>> a while, but then glusterfs starts consuming more and more memory until it
>> presumably runs out and ultimately dies or becomes useless.
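>>
>> (For the reads, picture the same files pulled straight back, e.g.:
>> node1: dd if=/phys/blah0 of=/dev/null bs=10M
>> node2: dd if=/phys/blah1 of=/dev/null bs=10M
>> the exact dd options aren't the point.)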
>>
>> Can anyone else confirm? And has anyone gotten io-threads to work at all?
>>
>> These systems are running Ubuntu Edgy, with just the generic kernel and
>> Fuse 2.6.3 applied.
>>
>> Thanks,
>>
>> Brent
>>
>>
>
>