[Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II
Mathieu Chateau
mathieu.chateau at lotp.fr
Sun Feb 21 12:12:31 UTC 2016
Hello,
When will these patches be included in an official Gluster release?
I upgraded my client to 3.7.8, but I still see a big leak after rsync jobs:
  PID USER      PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
 1570 root      20   0 7404740 6.210g  4108 S   0.0 40.0 106:24.02 glusterfs
 1573 root      20   0 3796044 2.924g  3580 S   0.0 18.8   7:07.05 glusterfs
 1571 root      20   0 2469924 1.695g  3588 S   0.0 10.9   1:19.75 glusterfs
Thanks.
Best regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-02-16 18:54 GMT+01:00 Soumya Koduri <skoduri at redhat.com>:
>
>
> On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote:
>
>> Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
>> revisions):
>>
>> ===
>> Soumya Koduri (3):
>> gfapi: Use inode_forget in case of handle objects
>> inode: Retire the inodes from the lru list in inode_table_destroy
>> rpc: Fix for rpc_transport_t leak
>> ===
>>
>> Here is the Valgrind output: [1]
>>
>> It seems that all leaks are gone, and that is very nice.
>>
>
> At least the major chunk of leaks seems to be gone. Many thanks to you too
> for the very detailed tests and analysis :)
>
> -Soumya
>
>
>
>> Many thanks to all devs.
>>
>> [1] https://gist.github.com/anonymous/eddfdaf3eb7bff458326
>>
>> 16.02.2016 15:30, Soumya Koduri wrote:
>>
>>> I have tested using your API app (I/Os done: create, write and stat;
>>> a minimal sketch of such an app appears below, after the quoted
>>> thread). I still do not see any inode-related leaks. However, I
>>> posted another fix for an rpc_transport object leak [1].
>>>
>>> Please re-check that you have the latest revision of [2] applied in
>>> your build.
>>>
>>> [1] http://review.gluster.org/#/c/13456/
>>> [2] http://review.gluster.org/#/c/13125/
>>>
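For reference, a minimal gfapi app of the shape described above (create,
write, stat, then tear down) looks roughly like the sketch below. The volume
name "testvol", the host "server1", and the file path are placeholders, not
details from this thread. Build it with "gcc -Wall -o glfs_leak_test
glfs_leak_test.c -lgfapi" and run it under Valgrind (e.g. "valgrind
--leak-check=full ./glfs_leak_test") to check what is still reachable after
glfs_fini().

===
/* glfs_leak_test.c - minimal gfapi create/write/stat cycle for leak
 * checking under Valgrind. All names below (volume, host, path) are
 * placeholders. */
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");   /* placeholder volume name */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "server1", 24007);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* create + write + stat, mirroring the I/Os mentioned above */
    glfs_fd_t *fd = glfs_creat(fs, "/leaktest", O_RDWR, 0644);
    if (fd) {
        const char buf[] = "leak test payload";
        glfs_write(fd, buf, sizeof(buf), 0);
        glfs_close(fd);
    }

    struct stat st;
    glfs_stat(fs, "/leaktest", &st);

    /* tear down; leaks that survive glfs_fini() are what the
     * inode/rpc_transport patches discussed in this thread address */
    if (glfs_fini(fs) != 0)
        fprintf(stderr, "glfs_fini failed\n");

    return 0;
}
===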