[Gluster-devel] gluster fuse consumes huge memory

Raghavendra Gowdappa rgowdapp at redhat.com
Thu Aug 9 05:13:44 UTC 2018


On Thu, Aug 9, 2018 at 10:36 AM, huting3 <huting3 at corp.netease.com> wrote:

> grep count outputs nothing, so I grepped size instead; the results are:
>
> $ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
> xlator.mount.fuse.itable.lru_size=191726
>

The kernel is holding too many inodes in its cache. What does the data set
look like? Do you have many directories? How many files do you have?
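
One way to confirm that the kernel cache is what keeps these inodes alive (a
rough sketch, assuming a Linux client; the sysctl only asks the kernel to
reclaim, it does not force it) is to drop the reclaimable dentry/inode caches
and then re-check lru_size in a fresh statedump:

# sync
# echo 2 > /proc/sys/vm/drop_caches    # reclaim dentries and inodes
# grep itable <fresh statedump> | grep lru_size

If lru_size drops sharply afterwards, the fuse process is holding the inode
contexts on behalf of the kernel's cache rather than leaking them.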


> $ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
> xlator.mount.fuse.itable.active_size=17
>
>
> huting3
> huting3 at corp.netease.com
>
>
> On 08/9/2018 12:36, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
> Can you get the output of the following commands?
>
> # grep itable <statedump> | grep lru | grep count
>
> # grep itable <statedump> | grep active | grep count
>
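> Depending on the version, the fields may be named lru_size/active_size
> rather than *_count; a single grep (just a sketch, assuming the usual
> statedump field names) pulls whichever is present:
>
> # grep -E 'itable\.(lru|active)_(count|size)' <statedump>
>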
> On Thu, Aug 9, 2018 at 9:25 AM, huting3 <huting3 at corp.netease.com> wrote:
>
>> Yes, I got the dump file and found many entries with huge num_allocs,
>> like the following:
>>
>> The memory usage of 4 allocation types is extremely high.
>>
>>  [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
>> size=47202352
>> num_allocs=2030212
>> max_size=47203074
>> max_num_allocs=2030235
>> total_allocs=26892201
>>
>> [protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
>> size=24362448
>> num_allocs=2030204
>> max_size=24367560
>> max_num_allocs=2030226
>> total_allocs=17830860
>>
>> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
>> size=2497947552
>> num_allocs=4578229
>> max_size=2459135680
>> max_num_allocs=7123206
>> total_allocs=41635232
>>
>> [mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
>> size=4038730976
>> num_allocs=1
>> max_size=4294962264
>> max_num_allocs=37
>> total_allocs=150049981
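>>
>> A rough way to rank every memusage section by size is sketched below; it
>> assumes each size= line immediately follows its usage-type header, as in
>> the snippets above:
>>
>> $ awk '/usage-type/ {hdr=$0; next} /^size=/ && hdr {split($0,a,"="); print a[2], hdr; hdr=""}' <statedump> | sort -n | tail -10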
>>
>>
>>
>> huting3
>> huting3 at corp.netease.com
>>
>>
>> On 08/9/2018 11:36, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>>
>>
>>
>> On Thu, Aug 9, 2018 at 8:55 AM, huting3 <huting3 at corp.netease.com> wrote:
>>
>>> Hi experts,
>>>
>>> I have run into a problem when using glusterfs: the fuse client consumes
>>> a huge amount of memory when writing a lot of files (more than a million)
>>> to the volume, and it eventually gets killed by the OS OOM killer. The
>>> memory the fuse process consumes can grow up to 100G! I wonder whether
>>> there is a memory leak in the gluster fuse process, or some other cause.
>>>
>>
>> Can you get a statedump of the fuse process while it is consuming that much memory?
>>
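>> (In case it helps, a minimal sketch for taking one, assuming a Linux
>> client and the default dump location: send SIGUSR1 to the fuse client
>> process and the dump should appear under /var/run/gluster. If there is
>> more than one glusterfs process, pick the pid of the mount in question.)
>>
>> # kill -USR1 $(pidof glusterfs)
>> # ls /var/run/gluster/glusterdump.*
>>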
>>
>>> My gluster version is 3.13.2; the gluster volume info is as follows:
>>>
>>> Volume Name: gv0
>>> Type: Distributed-Replicate
>>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 19 x 3 = 57
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
>>> Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick44: dl33.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick45: dl34.dg.163.org:/glusterfs_brick/brick3/gv0
>>> Brick46: dl0.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick47: dl1.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick48: dl2.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick49: dl3.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick50: dl5.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick51: dl6.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick52: dl9.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick53: dl10.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick54: dl11.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick55: dl12.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick56: dl13.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Brick57: dl14.dg.163.org:/glusterfs_brick/brick1/gv0
>>> Options Reconfigured:
>>> performance.cache-size: 10GB
>>> performance.parallel-readdir: on
>>> performance.readdir-ahead: on
>>> network.inode-lru-limit: 200000
>>> performance.md-cache-timeout: 600
>>> performance.cache-invalidation: on
>>> performance.stat-prefetch: on
>>> features.cache-invalidation-timeout: 600
>>> features.cache-invalidation: on
>>> features.inode-quota: off
>>> features.quota: off
>>> cluster.quorum-reads: on
>>> cluster.quorum-count: 2
>>> cluster.quorum-type: fixed
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: off
>>> cluster.server-quorum-ratio: 51%
>>>
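>>> (Any of these options can be changed on a live volume with gluster
>>> volume set; the value below is purely illustrative, not a
>>> recommendation:)
>>>
>>> # gluster volume set gv0 network.inode-lru-limit 50000
>>>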
>>>
>>> huting3
>>> huting3 at corp.netease.com
>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>