[Gluster-users] Gluster native mount is really slow compared to nfs

Vijay Bellur vbellur at redhat.com
Tue Jul 11 16:16:38 UTC 2017


On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com>
wrote:

> Hello Joe,
>
>
>
>
>
> I just did a mount like this (the newly added options are the caching
> timeouts and fopen-keep-cache):
>
>
> mount -t glusterfs \
>   -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log \
>   192.168.140.41:/www /var/www
>
>
> Results:
>
>
> root at app1:~/smallfile-master# ./smallfile_cli.py  --top /var/www/test
> --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64
> --record-size 64
> smallfile version 3.0
>                            hosts in test : ['192.168.140.41']
>                    top test directory(s) : ['/var/www/test']
>                                operation : cleanup
>                             files/thread : 5000
>                                  threads : 8
>            record size (KB, 0 = maximum) : 64
>                           file size (KB) : 64
>                   file size distribution : fixed
>                            files per dir : 100
>                             dirs per dir : 10
>               threads share directories? : N
>                          filename prefix :
>                          filename suffix :
>              hash file number into dir.? : N
>                      fsync after modify? : N
>           pause between files (microsec) : 0
>                     finish all requests? : Y
>                               stonewall? : Y
>                  measure response times? : N
>                             verify read? : Y
>                                 verbose? : False
>                           log to stderr? : False
>                            ext.attr.size : 0
>                           ext.attr.count : 0
>                permute host directories? : N
>                 remote program directory : /root/smallfile-master
>                network thread sync. dir. : /var/www/test/network_shared
> starting all threads by creating starting gate file
> /var/www/test/network_shared/starting_gate.tmp
> host = 192.168.140.41,thr = 00,elapsed = 1.232004,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 01,elapsed = 1.148738,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 02,elapsed = 1.130913,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 03,elapsed = 1.183088,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 04,elapsed = 1.220752,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 05,elapsed = 1.228039,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 06,elapsed = 1.216787,files = 5000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 07,elapsed = 1.229036,files = 5000,records =
> 0,status = ok
> total threads = 8
> total files = 40000
> 100.00% of requested files processed, minimum is  70.00
> 1.232004 sec elapsed time
> 32467.428972 files/sec
>
>
>
> root at app1:~/smallfile-master# ./smallfile_cli.py  --top /var/www/test
> --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64
> --record-size 64
> smallfile version 3.0
>                            hosts in test : ['192.168.140.41']
>                    top test directory(s) : ['/var/www/test']
>                                operation : cleanup
>                             files/thread : 50000
>                                  threads : 8
>            record size (KB, 0 = maximum) : 64
>                           file size (KB) : 64
>                   file size distribution : fixed
>                            files per dir : 100
>                             dirs per dir : 10
>               threads share directories? : N
>                          filename prefix :
>                          filename suffix :
>              hash file number into dir.? : N
>                      fsync after modify? : N
>           pause between files (microsec) : 0
>                     finish all requests? : Y
>                               stonewall? : Y
>                  measure response times? : N
>                             verify read? : Y
>                                 verbose? : False
>                           log to stderr? : False
>                            ext.attr.size : 0
>                           ext.attr.count : 0
>                permute host directories? : N
>                 remote program directory : /root/smallfile-master
>                network thread sync. dir. : /var/www/test/network_shared
> starting all threads by creating starting gate file
> /var/www/test/network_shared/starting_gate.tmp
> host = 192.168.140.41,thr = 00,elapsed = 4.242312,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 01,elapsed = 4.250831,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 02,elapsed = 3.771269,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 03,elapsed = 4.060653,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 04,elapsed = 3.880653,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 05,elapsed = 3.847107,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 06,elapsed = 3.895537,files = 50000,records =
> 0,status = ok
> host = 192.168.140.41,thr = 07,elapsed = 3.966394,files = 50000,records =
> 0,status = ok
> total threads = 8
> total files = 400000
> 100.00% of requested files processed, minimum is  70.00
> 4.250831 sec elapsed time
> 94099.245073 files/sec
> root at app1:~/smallfile-master#
>
>
>
>
> As you can see, it's now crazy fast, I think close to or even faster than NFS!
> What the hell!??!
>
>
>
> I'm so excited I'm posting already. Any suggestions for these parameters? I
> will do additional testing over here, because this is ridiculous. That would
> mean the defaults are no good at all...
>
>
>

Would it be possible to profile the client [1] with defaults and the set of
options used now? That could help in understanding the performance delta
better.

Thanks,
Vijay

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling
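
For reference, the client-side profiling procedure linked above amounts to
triggering an io-stats dump on the FUSE mount point after running the
workload; a rough sketch follows (the dump path and mount point are
illustrative, and the exact steps are described at the link):

```shell
# Run the smallfile workload first, then trigger a one-shot io-stats dump
# from the FUSE client by writing a virtual xattr on the mount point.
# The dump file path given here is illustrative.
setfattr -n trusted.io-stats-dump -v /tmp/gluster_client_profile.log /var/www

# The dump lists per-FOP call counts and latencies. Taking one dump with
# default mount options and another with the caching options, then comparing
# them, should show where the time went (e.g. LOOKUP/STAT call counts).
less /tmp/gluster_client_profile.log
```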