<div dir="ltr"><br><div class="gmail_quote"><div dir="ltr">On Thu, Oct 18, 2018 at 2:30 PM Yanfei Wang <<a href="mailto:backyes@gmail.com">backyes@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear, Developers,<br>
<br>
<br>
> After much tuning and benchmarking on different Gluster releases
> (3.12.15, 4.1, 3.11), the client fuse process eats hundreds of GB of
> RAM on a 256 GB machine and is eventually OOM-killed.
>
> Despite a lot of searching (FUSE-related papers, benchmarks, our own
> testing), we still cannot determine why the memory keeps growing. We
> are fairly sure that
>
> xlator.mount.fuse.itable.lru_limit=0
>
> on the client fuse process could give us some clues.

There is no 'lru_limit' implemented on the client side as of now! We are
trying to get that feature done for glusterfs-6. Until then, try pruning
the inode table by forcing forgets, i.e. by dropping the kernel caches:

  echo 3 | sudo tee /proc/sys/vm/drop_caches

Meanwhile, some questions on the workload: are you dealing with hundreds
of millions of files, or with fewer but larger files?
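
If it helps, this is roughly how I would check whether dropping caches
actually prunes the inode table and shrinks the fuse client. The
mountpoint and pgrep pattern below are only examples to adapt to your
setup, and the statedump location and key names can differ between
releases:

  # example mountpoint -- adjust to your setup
  MNT=/mnt/glustervol
  FUSE_PID=$(pgrep -f "glusterfs.*${MNT}" | head -n 1)

  # resident memory of the fuse client before pruning
  grep VmRSS /proc/${FUSE_PID}/status

  # flush dirty data, then ask the kernel to drop dentries/inodes;
  # the kernel then sends FORGETs to the fuse client so it can drop
  # those inodes from its table
  sync
  echo 3 | sudo tee /proc/sys/vm/drop_caches

  # resident memory after; compare with the value above
  grep VmRSS /proc/${FUSE_PID}/status

  # optionally take a statedump and look at the inode table counters
  # (dumps usually land under /var/run/gluster/ with the pid in the name)
  sudo kill -USR1 "${FUSE_PID}"
  sudo grep 'xlator.mount.fuse.itable' /var/run/gluster/*"${FUSE_PID}"*dump*

If the RSS stays high even after the forgets, sharing statedumps taken
before and after would help us see where the memory is actually held.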

> My guess is that the gluster fuse process caches file inodes on the
> client side and never evicts old ones, but I do not know whether this
> is a design issue, a trade-off, or a bug.
>
> my configuration:
>
> Options Reconfigured:
> performance.write-behind-window-size: 256MB
> performance.write-behind: on
> cluster.lookup-optimize: on
> transport.listen-backlog: 1024
> performance.io-thread-count: 6
> performance.cache-size: 10GB
> performance.quick-read: on
> performance.parallel-readdir: on
> network.inode-lru-limit: 50000
> cluster.quorum-reads: on
> cluster.quorum-count: 2
> cluster.quorum-type: fixed
> cluster.server-quorum-type: server
> client.event-threads: 4
> performance.stat-prefetch: on
> performance.md-cache-timeout: 600
> cluster.min-free-disk: 5%
> performance.flush-behind: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.server-quorum-ratio: 51%
>
> We very much hope for some replies from the community. Even telling us
> that this cannot be resolved for design reasons would be a great help
> to us.
>
> Thanks a lot.
>
> - Fei
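
PS: for anyone trying to reproduce the setup above, the quoted 'Options
Reconfigured' list corresponds to ordinary volume-set commands, along
these lines (the volume name 'myvol' is just a placeholder):

  gluster volume set myvol performance.cache-size 10GB
  gluster volume set myvol network.inode-lru-limit 50000
  gluster volume set myvol performance.parallel-readdir on

If I remember correctly, cluster.server-quorum-ratio is cluster-wide and
is set with 'gluster volume set all cluster.server-quorum-ratio 51%'.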

-- 
Amar Tumballi (amarts)