[Gluster-users] gluster 3.12.8 fuse consume huge memory

Nithya Balachandran nbalacha at redhat.com
Fri Aug 31 04:25:18 UTC 2018


Hi,

Please take statedumps of the 3.12.13 client process at intervals while the
memory is increasing and send them across.
We will also need the output of gluster volume info for the volume in question.
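For reference, a client statedump can be triggered by sending SIGUSR1 to the
fuse mount process; dumps are written under /var/run/gluster/ by default. A
minimal sketch (the volume name "myvol" and the pgrep pattern are placeholders;
adjust them for your mount):

```shell
# Find the fuse client PID (adjust the pattern to match your mount command line).
pid=$(pgrep -f 'glusterfs.*fuse')

# SIGUSR1 makes the gluster process write a statedump, by default to
# /var/run/gluster/glusterdump.<pid>.dump.<timestamp>.
kill -USR1 "$pid"

# Repeat the kill -USR1 at intervals while memory grows, then also collect:
gluster volume info myvol
```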

Thanks,
Nithya


On 31 August 2018 at 08:32, huting3 <huting3 at corp.netease.com> wrote:

> Thanks for your reply. I also tested gluster 3.12.13 and found that the
> client also consumes huge amounts of memory:
>
> PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
> 180095 root      20   0 4752256 4.091g   4084 S  43.5  1.6  17:54.70
> glusterfs
>
> I read and wrote some files through the gluster fuse client; the client
> consumed 4 GB of memory and the usage keeps rising.
>
> Is it really fixed in 3.12.13?
>
> huting3
> huting3 at corp.netease.com
>
>
> On 08/30/2018 22:37, Darrell Budic <budic at onholyground.com> wrote:
>
> It’s probably https://bugzilla.redhat.com/show_bug.cgi?id=1593826,
> although I did not encounter it in 3.12.8, only in 3.12.9 through 3.12.12.
>
> It’s fixed in 3.12.13.
>
> ------------------------------
> *From:* huting3 <huting3 at corp.netease.com>
> *Subject:* [Gluster-users] gluster 3.12.8 fuse consume huge memory
> *Date:* August 30, 2018 at 2:02:01 AM CDT
> *To:* gluster-users at gluster.org
>
> The version of glusterfs I installed is 3.12.8, and I find that its client
> also consumes huge amounts of memory.
>
>
> I took a statedump, and found that the memory usage of one allocation type
> is extremely large, as shown below:
>
>
> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
> size=4250821416
> num_allocs=1
> max_size=4294960048
> max_num_allocs=3
> total_allocs=12330719
>
>
> Does this mean a memory leak exists in the glusterfs client?
>
> huting3
> huting3 at corp.netease.com
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
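For scale, the size counters in the statedump quoted above are in bytes, and
converting them shows this single allocation type accounts for essentially all
of the ~4 g RES reported by top. A quick sanity check (not from the original
thread):

```python
# Convert the gf_fuse_mt_iov_base counters from the quoted statedump to GiB.
size = 4_250_821_416        # current bytes held
max_size = 4_294_960_048    # peak bytes held

print(f"size     = {size / 2**30:.2f} GiB")      # 3.96 GiB
print(f"max_size = {max_size / 2**30:.2f} GiB")  # 4.00 GiB
```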

