[Gluster-devel] xlator.mount.fuse.itable.lru_limit=0 at client fuse process

Yanfei Wang backyes at gmail.com
Wed Oct 17 11:36:54 UTC 2018


Dear Developers,


After much tuning and benchmarking on different Gluster releases (3.12.15,
4.1, 3.11), the client fuse process eventually eats hundreds of GB of RAM on
a 256 GB system and is then OOM-killed at some point.
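
For reference, this is roughly how we watch the client process growing; a
minimal sketch, where the mount-point pattern and the sampling interval are
only examples:

    # Find the fuse client for one mount (the pattern is a placeholder for
    # our actual mount point) and sample its resident memory over time.
    pid=$(pgrep -f 'glusterfs.*<mount-point>' | head -1)
    while sleep 60; do
        grep VmRSS /proc/$pid/status
    done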

After consulting many Google searches, FUSE-related papers, benchmarks, and
tests, we still cannot determine why the memory grows larger and larger. We
do believe that

xlator.mount.fuse.itable.lru_limit=0 at the client fuse process could give
us some clues.

My guess is that the gluster fuse process caches file inodes on the client
side and never evicts old inodes. However, I do not know whether this is a
design issue, a trade-off, or a bug.
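
This is how we have been checking the client-side inode table; a minimal
sketch, assuming a single fuse mount on the box and that statedumps land in
/var/run/gluster (older builds may write them to a different path):

    # SIGUSR1 makes the glusterfs client write a statedump
    # (glusterdump.<pid>.dump.<timestamp>).
    kill -USR1 $(pidof glusterfs)

    # Inspect the fuse inode table counters; on our mounts lru_size keeps
    # climbing while lru_limit stays at 0.
    grep 'xlator.mount.fuse.itable' /var/run/gluster/glusterdump.*.dump.* \
        | grep -E 'lru_limit|lru_size|active_size'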

My configuration:

Options Reconfigured:
performance.write-behind-window-size: 256MB
performance.write-behind: on
cluster.lookup-optimize: on
transport.listen-backlog: 1024
performance.io-thread-count: 6
performance.cache-size: 10GB
performance.quick-read: on
performance.parallel-readdir: on
network.inode-lru-limit: 50000
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
cluster.server-quorum-type: server
client.event-threads: 4
performance.stat-prefetch: on
performance.md-cache-timeout: 600
cluster.min-free-disk: 5%
performance.flush-behind: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.server-quorum-ratio: 51%
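
These options were applied with the usual CLI; a short sketch (VOLNAME
stands in for our volume name). As far as we can tell,
network.inode-lru-limit only caps the brick-side inode tables, not the
client fuse itable mentioned above:

    # Set and re-check the server-side inode LRU limit for the volume.
    gluster volume set VOLNAME network.inode-lru-limit 50000
    gluster volume get VOLNAME network.inode-lru-limit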

We very much hope for replies from the community. Even being told that this
trouble cannot be resolved for design reasons would be GREAT for us.

Thanks a lot.

- Fei

