[Gluster-users] Help diagnosing poor performance on mirrored pair

Arthur Pemberton pemboa at gmail.com
Mon Jul 30 21:58:23 UTC 2018


I have a pair of nodes configured with GlusterFS, and I am trying to optimize
the read performance of the setup. With the smallfile_cli.py test I'm getting
4 MiB/s on the CREATE phase and 14 MiB/s on the READ phase.

I'm using the FUSE mount, mounting from the local GlusterFS server:
localhost:/gv_wordpress_std_var /var/www/wordpress glusterfs
defaults,_netdev,noauto,x-systemd.automount,noatime,log-level=ERROR 0 0
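For reference, the fstab entry above corresponds to a manual mount like the following (a sketch using the same volume name, mount point, and options; untested on this particular setup):

```shell
# Mount the volume by hand with the same options as the fstab entry
mount -t glusterfs -o noatime,log-level=ERROR \
    localhost:/gv_wordpress_std_var /var/www/wordpress

# Confirm the FUSE mount is active
mount | grep wordpress
```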

I'm aware from reading that small files aren't GlusterFS's ideal workload,
but these smallfile results still seem very low. Both machines are VMs on a
Rackspace dedicated host.

----

Volume Name: gv_wordpress_std_var
Type: Replicate
Volume ID: c02ce2a4-4953-4228-9abe-8e3a55f8cdfd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rs-wordpress-std-a:/data/glusterfs/bricks/wordpress_std_var-a
Brick2: rs-wordpress-std-b:/data/glusterfs/bricks/wordpress_std_var-b
Options Reconfigured:
client.event-threads: 4
cluster.lookup-optimize: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.cache-size: 1GB
network.inode-lru-limit: 50000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
auth.allow: 172.24.16.*
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
performance.cache-max-file-size: 128MB
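One commonly suggested tuning for small-file workloads that is not in the option list above is the negative-lookup cache; whether it helps here is an assumption to verify, and option availability depends on the Gluster version (check with `gluster volume set help`):

```shell
# Hedged suggestion: enable the negative-lookup cache, which can reduce
# LOOKUP traffic for small-file workloads such as PHP/WordPress trees.
gluster volume set gv_wordpress_std_var performance.nl-cache on
gluster volume set gv_wordpress_std_var performance.nl-cache-timeout 600

# Server-side event threads can be raised to match client.event-threads
gluster volume set gv_wordpress_std_var server.event-threads 4
```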

----

                                 version : 3.1
                           hosts in test : None
                   top test directory(s) : ['/var/www/wordpress/smf']
                               operation : create
                            files/thread : 10000
                                 threads : 8
           record size (KB, 0 = maximum) : 0
                          file size (KB) : 4
                  file size distribution : fixed
                           files per dir : 100
                            dirs per dir : 10
              threads share directories? : N
                         filename prefix :
                         filename suffix :
             hash file number into dir.? : N
                     fsync after modify? : N
          pause between files (microsec) : 0
             minimum directories per sec : 50
                    finish all requests? : Y
                              stonewall? : Y
                 measure response times? : Y
                            verify read? : Y
                                verbose? : False
                          log to stderr? : False
                           ext.attr.size : 0
                          ext.attr.count : 0
host = wordpress-std-b,thr = 00,elapsed = 77.843555,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 01,elapsed = 77.789082,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 02,elapsed = 77.887903,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 03,elapsed = 77.739701,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 04,elapsed = 77.943002,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 05,elapsed = 77.632973,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 06,elapsed = 77.547384,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 07,elapsed = 77.917453,files = 10000,records = 10000,status = ok
total threads = 8
total files = 80000
total IOPS = 80000
total data =     0.305 GiB
100.00% of requested files processed, minimum is  90.00
elapsed time =    77.943
files/sec = 1026.391055
IOPS = 1026.391055
MiB/sec = 4.009340

----

                                 version : 3.1
                           hosts in test : None
                   top test directory(s) : ['/var/www/wordpress/smf']
                               operation : read
                            files/thread : 10000
                                 threads : 8
           record size (KB, 0 = maximum) : 0
                          file size (KB) : 4
                  file size distribution : fixed
                           files per dir : 100
                            dirs per dir : 10
              threads share directories? : N
                         filename prefix :
                         filename suffix :
             hash file number into dir.? : N
                     fsync after modify? : N
          pause between files (microsec) : 0
             minimum directories per sec : 50
                    finish all requests? : Y
                              stonewall? : Y
                 measure response times? : Y
                            verify read? : Y
                                verbose? : False
                          log to stderr? : False
                           ext.attr.size : 0
                          ext.attr.count : 0
host = wordpress-std-b,thr = 00,elapsed = 20.905861,files = 9900,records = 9900,status = ok
host = wordpress-std-b,thr = 01,elapsed = 20.851949,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 02,elapsed = 20.791474,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 03,elapsed = 20.895812,files = 9900,records = 9900,status = ok
host = wordpress-std-b,thr = 04,elapsed = 20.869731,files = 9900,records = 9900,status = ok
host = wordpress-std-b,thr = 05,elapsed = 20.795042,files = 9900,records = 9900,status = ok
host = wordpress-std-b,thr = 06,elapsed = 20.809913,files = 10000,records = 10000,status = ok
host = wordpress-std-b,thr = 07,elapsed = 20.981664,files = 10000,records = 10000,status = ok
total threads = 8
total files = 79600
total IOPS = 79600
total data =     0.304 GiB
 99.50% of requested files processed, minimum is  90.00
elapsed time =    20.982
files/sec = 3793.788677
IOPS = 3793.788677
MiB/sec = 14.819487
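For what it's worth, the MiB/s figures are internally consistent with the files/sec rates: at a fixed 4 KiB file size, MiB/s is just files/sec * 4 / 1024. A quick check:

```python
# Sanity-check the reported MiB/s against files/sec for fixed 4 KiB files.
FILE_SIZE_KIB = 4  # from "file size (KB) : 4" in the smallfile output

def mib_per_sec(files_per_sec, file_size_kib=FILE_SIZE_KIB):
    """Convert a files/sec rate of fixed-size files to MiB/s."""
    return files_per_sec * file_size_kib / 1024

create_mib = mib_per_sec(1026.391055)  # ~4.009 MiB/s, matching the CREATE run
read_mib = mib_per_sec(3793.788677)    # ~14.819 MiB/s, matching the READ run
```

So the throughput numbers are simply the small-file rate restated in bandwidth terms; the bottleneck is per-file latency, not raw bandwidth.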