[Gluster-users] small files performance

Alastair Neil ajneil.tech at gmail.com
Tue Oct 10 21:59:10 UTC 2017


I just tried setting:

performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch on
performance.cache-invalidation on
performance.md-cache-timeout 600
network.inode-lru-limit 50000

and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were still there, however, if you accessed
them directly by path. Servers are on 3.10.5 and the clients are on 3.10
and 3.12.
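
For reference, the commands used to apply these options look like this (a
sketch only; "gv0" here stands in for the real volume name):

gluster volume set gv0 performance.parallel-readdir on
gluster volume set gv0 features.cache-invalidation on
gluster volume set gv0 features.cache-invalidation-timeout 600
gluster volume set gv0 performance.stat-prefetch on
gluster volume set gv0 performance.cache-invalidation on
gluster volume set gv0 performance.md-cache-timeout 600
gluster volume set gv0 network.inode-lru-limit 50000

Any one of them can be reverted to its default with "gluster volume reset
gv0 <option>".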

Any ideas?


On 10 October 2017 at 10:53, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:

> 2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
>
>> Hi Gandalf,
>>
>> We have multiple tunings for small files that decrease the time spent on
>> negative lookups, enable meta-data caching, and enable parallel readdir.
>> Bumping the server and client event threads will also help increase
>> small-file performance.
>>
>> gluster v set <vol-name> group metadata-cache
>> gluster v set <vol-name> group nl-cache
>> gluster v set <vol-name> performance.parallel-readdir on (Note:
>> performance.readdir-ahead should be on)
>>
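> For reference, the values those group commands apply can be verified with
> "gluster volume get" (a quick sketch, again assuming a volume named gv0):
>
> gluster volume get gv0 all | grep -E \
>     'md-cache|cache-invalidation|nl-cache|parallel-readdir|readdir-ahead'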
>
> This is what I'm getting with the suggested parameters.
> I'm running "fio" from a mounted gluster client:
> 172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
>
> # fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 \
>     --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 \
>     --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test
> fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
> ...
> fio-2.16
> Starting 16 processes
> fio-test: Laying out IO file(s) (1 file(s) / 256MB)
> Jobs: 14 (f=13): [m(5),_(1),m(8),f(1),_(1)] [33.9% done] [1000KB/440KB/0KB/s] [125/55/0 iops] [eta 01m:59s]
> fio-test: (groupid=0, jobs=16): err= 0: pid=2051: Tue Oct 10 16:51:46 2017
>   read : io=43392KB, bw=733103B/s, iops=89, runt= 60610msec
>     slat (usec): min=14, max=1992.5K, avg=177873.67, stdev=382294.06
>     clat (usec): min=768, max=6016.8K, avg=1871390.57, stdev=1082220.06
>      lat (usec): min=872, max=6630.6K, avg=2049264.23, stdev=1158405.41
>     clat percentiles (msec):
>      |  1.00th=[   20],  5.00th=[  208], 10.00th=[  457], 20.00th=[  873],
>      | 30.00th=[ 1237], 40.00th=[ 1516], 50.00th=[ 1795], 60.00th=[ 2073],
>      | 70.00th=[ 2442], 80.00th=[ 2835], 90.00th=[ 3326], 95.00th=[ 3785],
>      | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5800],
>      | 99.99th=[ 5997]
>   write: io=18856KB, bw=318570B/s, iops=38, runt= 60610msec
>     slat (usec): min=17, max=3428, avg=212.62, stdev=287.88
>     clat (usec): min=59, max=6015.6K, avg=1693729.12, stdev=1003122.83
>      lat (usec): min=79, max=6015.9K, avg=1693941.74, stdev=1003126.51
>     clat percentiles (usec):
>      |  1.00th=[  724],  5.00th=[144384], 10.00th=[403456], 20.00th=[765952],
>      | 30.00th=[1105920], 40.00th=[1368064], 50.00th=[1630208], 60.00th=[1875968],
>      | 70.00th=[2179072], 80.00th=[2572288], 90.00th=[3031040], 95.00th=[3489792],
>      | 99.00th=[4227072], 99.50th=[4423680], 99.90th=[4751360], 99.95th=[5210112],
>      | 99.99th=[5996544]
>     lat (usec) : 100=0.15%, 250=0.05%, 500=0.06%, 750=0.09%, 1000=0.05%
>     lat (msec) : 2=0.28%, 4=0.09%, 10=0.15%, 20=0.39%, 50=1.81%
>     lat (msec) : 100=1.02%, 250=1.63%, 500=5.59%, 750=6.03%, 1000=7.31%
>     lat (msec) : 2000=35.61%, >=2000=39.67%
>   cpu          : usr=0.01%, sys=0.01%, ctx=8218, majf=11, minf=295
>   IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=96.9%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued    : total=r=5424/w=2357/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=16
>
> Run status group 0 (all jobs):
>    READ: io=43392KB, aggrb=715KB/s, minb=715KB/s, maxb=715KB/s, mint=60610msec, maxt=60610msec
>   WRITE: io=18856KB, aggrb=311KB/s, minb=311KB/s, maxb=311KB/s, mint=60610msec, maxt=60610msec
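>
> As a sanity check on those numbers: 89 read IOPS x 8 KiB is roughly 712
> KiB/s, which lines up with the reported READ aggrb of 715KB/s, and 38
> write IOPS x 8 KiB is roughly 304 KiB/s against the 311KB/s WRITE aggrb.
> So the volume is sustaining only ~127 small-block IOPS in total.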
>