<div dir="ltr"><div><div>I just tried setting:<br><br>performance.parallel-readdir on<br>features.cache-invalidation on<br>features.cache-invalidation-timeout 600<br>performance.stat-prefetch<br>performance.cache-invalidation<br>performance.md-cache-timeout 600<br>network.inode-lru-limit 50000<br>performance.cache-invalidation on<br><br></div>and clients could not see their files with ls when accessing via a fuse mount. The files and directories were there, however, if you accessed them directly. Server are 3.10.5 and the clients are 3.10 and 3.12.<br><br></div>Any ideas?<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 10 October 2017 at 10:53, Gandalf Corvotempesta <span dir="ltr"><<a href="mailto:gandalf.corvotempesta@gmail.com" target="_blank">gandalf.corvotempesta@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">2017-10-10 8:25 GMT+02:00 Karan Sandha <span dir="ltr"><<a href="mailto:ksandha@redhat.com" target="_blank">ksandha@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Gandalf,<div><br></div><div>We have multiple tuning to do for small-files which decrease the time for negative lookups , meta-data caching, parallel readdir. Bumping the server and client event threads will help you out in increasing the small file performance. </div><div><br></div><div>gluster v set <vol-name> group metadata-cache</div><div>gluster v set <vol-name> group nl-cache</div><div>gluster v set <vol-name> performance.parallel-readdir on (Note : readdir should be on)</div></div></blockquote><div><br></div><div>This is what i'm getting with suggested parameters.</div><div>I'm running "fio" from a mounted gluster client:</div><div><div>172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs (rw,relatime,user_id=0,group_<wbr>id=0,default_permissions,<wbr>allow_other,max_read=131072)</div></div><div><br></div><div><br></div><div><br></div><div><div># fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test</div><div>fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16</div><div>...</div><div>fio-2.16</div><div>Starting 16 processes</div><div>fio-test: Laying out IO file(s) (1 file(s) / 256MB)</div><div>Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB /s] [125/55/0 iops] [eta 01m:59s]</div><div>fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017</div><div> read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec</div><div> slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06</div><div> clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06</div><div> lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41</div><div> clat percentiles (msec):</div><div> | 1.00th=[ 20], 5.00th=[ 208], 10.00th=[ 457], 20.00th=[ 873],</div><div> | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],</div><div> | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],</div><div> | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],</div><div> | 99.99th=[ 5997]</div><div> write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec</div><div> slat (usec): min=17, 
On 10 October 2017 at 10:53, Gandalf Corvotempesta <gandalf.corvotempesta@gmail.com> wrote:
> 2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha@redhat.com>:
>> Hi Gandalf,
>>
>> We have multiple tunings for small files: they reduce the time for negative lookups, enable metadata caching, and turn on parallel readdir. Bumping the server and client event threads will also help increase small-file performance.
>>
>> gluster v set <vol-name> group metadata-cache
>> gluster v set <vol-name> group nl-cache
>> gluster v set <vol-name> performance.parallel-readdir on   (Note: readdir-ahead should be on)
>
> This is what I'm getting with the suggested parameters.
> I'm running "fio" from a mounted gluster client:
> 172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
> # fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test
> fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
> ...
> fio-2.16
> Starting 16 processes
> fio-test: Laying out IO file(s) (1 file(s) / 256MB)
> Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB /s] [125/55/0 iops] [eta 01m:59s]
> fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017
>   read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec
>     slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06
>     clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06
>      lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41
>     clat percentiles (msec):
>      |  1.00th=[   20],  5.00th=[  208], 10.00th=[  457], 20.00th=[  873],
>      | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],
>      | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],
>      | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],
>      | 99.99th=[ 5997]
>   write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec
>     slat (usec): min=17, max=3428, avg=212.62, stdev=287.88
>     clat (usec): min=59, max=6015.6K, avg=1693729.12, stdev=1003122.83
>      lat (usec): min=79, max=6015.9K, avg=1693941.74, stdev=1003126.51
>     clat percentiles (usec):
>      |  1.00th=[  724],  5.00th=[144384], 10.00th=[403456], 20.00th=[765952],
>      | 30.00th=[1105920], 40.00th=[1368064], 50.00th=[1630208], 60.00th=[1875968],
>      | 70.00th=[2179072], 80.00th=[2572288], 90.00th=[3031040], 95.00th=[3489792],
>      | 99.00th=[4227072], 99.50th=[4423680], 99.90th=[4751360], 99.95th=[5210112],
>      | 99.99th=[5996544]
>     lat (usec) : 100=0.15%, 250=0.05%, 500=0.06%, 750=0.09%, 1000=0.05%
>     lat (msec) : 2=0.28%, 4=0.09%, 10=0.15%, 20=0.39%, 50=1.81%
>     lat (msec) : 100=1.02%, 250=1.63%, 500=5.59%, 750=6.03%, 1000=7.31%
>     lat (msec) : 2000=35.61%, >=2000=39.67%
>   cpu          : usr=0.01%, sys=0.01%, ctx=8218, majf=11, minf=295
>   IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=96.9%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued    : total=r=5424/w=2357/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=16
>
> Run status group 0 (all jobs):
>    READ: io=43392KB, aggrb=715KB/s, minb=715KB/s, maxb=715KB/s, mint=60610msec, maxt=60610msec
>   WRITE: io=18856KB, aggrb=311KB/s, minb=311KB/s, maxb=311KB/s, mint=60610msec, maxt=60610msec
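PS: regarding the "event threads" mentioned above, as far as I understand those are the client.event-threads and server.event-threads volume options, so the bump would look something like this ("gv0" and the value 4 are placeholders, not something I have tested here):

gluster volume set gv0 client.event-threads 4
gluster volume set gv0 server.event-threads 4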