[Gluster-users] Gluster native mount is really slow compared to nfs

Jo Goossens jo.goossens at hosted-power.com
Tue Jul 11 16:48:34 UTC 2017


PS: I just tested the difference between these two mounts:

 
mount -t glusterfs -o negative-timeout=1,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

So the only difference is a negative timeout of 1 second.

In this particular test: ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64 --record-size 64

The result is about 4 seconds with the 1-second negative timeout defined, and many minutes without it (I gave up after 15 minutes of waiting).
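
As an aside, a rough way to see the negative-lookup effect in isolation (the missing_* names are made up and just need to not exist on the volume):

# first run: every lookup of a nonexistent name goes out to the bricks
time for i in $(seq 1 1000); do stat /var/www/missing_$i >/dev/null 2>&1; done
# second run: with negative-timeout set, the repeated ENOENT answers
# should now come from the client-side cache and return much faster
time for i in $(seq 1 1000); do stat /var/www/missing_$i >/dev/null 2>&1; done
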
I will now move on to some real-world tests to see how it performs there.
Regards,
Jo

-----Original message-----
From: Jo Goossens <jo.goossens at hosted-power.com>
Sent: Tue 11-07-2017 18:23
Subject: Re: [Gluster-users] Gluster native mount is really slow compared to nfs
To: Vijay Bellur <vbellur at redhat.com>
CC: gluster-users at gluster.org
 

Hello Vijay,

What do you mean exactly? What info is missing?

 
PS: I already found out that for this particular test all the difference comes from negative-timeout=600; when I remove it, everything is much slower again.
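
To keep these options across reboots, an /etc/fstab entry along these lines should work (an untested sketch of the same mount; _netdev just delays mounting until the network is up):

192.168.140.41:/www  /var/www  glusterfs  negative-timeout=600,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log,_netdev  0 0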

 
 
Regards,
Jo
 
-----Original message-----
From: Vijay Bellur <vbellur at redhat.com>
Sent: Tue 11-07-2017 18:16
Subject: Re: [Gluster-users] Gluster native mount is really slow compared to nfs
To: Jo Goossens <jo.goossens at hosted-power.com>
CC: gluster-users at gluster.org; Joe Julian <joe at julianfamily.org>
 


On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com> wrote:
 

Hello Joe,

 
 
I just did a mount like this (the additions compared to my earlier command are attribute-timeout=600, entry-timeout=600, negative-timeout=600 and fopen-keep-cache):

 
mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
 Results:

 
root at app1:~/smallfile-master# ./smallfile_cli.py  --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
                               operation : cleanup
                            files/thread : 5000
                                 threads : 8
           record size (KB, 0 = maximum) : 64
                          file size (KB) : 64
                  file size distribution : fixed
                           files per dir : 100
                            dirs per dir : 10
              threads share directories? : N
                         filename prefix :
                         filename suffix :
             hash file number into dir.? : N
                     fsync after modify? : N
          pause between files (microsec) : 0
                    finish all requests? : Y
                              stonewall? : Y
                 measure response times? : N
                            verify read? : Y
                                verbose? : False
                          log to stderr? : False
                           ext.attr.size : 0
                          ext.attr.count : 0
               permute host directories? : N
                remote program directory : /root/smallfile-master
               network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 1.232004,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 1.148738,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 1.130913,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 1.183088,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 1.220752,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 1.228039,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 1.216787,files = 5000,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 1.229036,files = 5000,records = 0,status = ok
total threads = 8
total files = 40000
100.00% of requested files processed, minimum is  70.00
1.232004 sec elapsed time
32467.428972 files/sec
  
root at app1:~/smallfile-master# ./smallfile_cli.py  --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
                               operation : cleanup
                            files/thread : 50000
                                 threads : 8
           record size (KB, 0 = maximum) : 64
                          file size (KB) : 64
                  file size distribution : fixed
                           files per dir : 100
                            dirs per dir : 10
              threads share directories? : N
                         filename prefix :
                         filename suffix :
             hash file number into dir.? : N
                     fsync after modify? : N
          pause between files (microsec) : 0
                    finish all requests? : Y
                              stonewall? : Y
                 measure response times? : N
                            verify read? : Y
                                verbose? : False
                          log to stderr? : False
                           ext.attr.size : 0
                          ext.attr.count : 0
               permute host directories? : N
                remote program directory : /root/smallfile-master
               network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 4.242312,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 4.250831,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 3.771269,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 4.060653,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 3.880653,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 3.847107,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 3.895537,files = 50000,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 3.966394,files = 50000,records = 0,status = ok
total threads = 8
total files = 400000
100.00% of requested files processed, minimum is  70.00
4.250831 sec elapsed time
94099.245073 files/sec
root at app1:~/smallfile-master#
  
As you can see it's now crazy fast, I think close to or even faster than nfs!! What the hell?!

 
I'm so excited I'm posting right away. Any suggestions for those parameters? I will do additional testing over here, because this is ridiculous. This would mean the defaults are no good at all...

 
 Would it be possible to profile the client [1] with defaults and the set of options used now? That could help in understanding the performance delta better.
Thanks,
Vijay

[1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/#client-side-profiling
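
For reference, the client-side profiling in [1] works by asking the io-stats translator on the mount point to dump its counters to a file, roughly like this (see the doc for the exact steps; the output paths are examples):

setfattr -n trace.io-stats-dump -v /tmp/io-stats-before.txt /var/www
# ... run the workload ...
setfattr -n trace.io-stats-dump -v /tmp/io-stats-after.txt /var/www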
  
 

