[Gluster-users] performance in 3.3

Doug Schouten dschoute at sfu.ca
Fri Oct 19 01:44:41 UTC 2012


Hi,

I am noticing rather slow read performance using GlusterFS 3.3 with
the following volume configuration:

Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: server1:/srv/data
Brick2: server2:/srv/data
Brick3: server3:/srv/data
Brick4: server4:/srv/data
Options Reconfigured:
features.quota: off
features.quota-timeout: 1800
performance.flush-behind: on
performance.io-thread-count: 64
performance.quick-read: on
performance.stat-prefetch: on
performance.io-cache: on
performance.write-behind: on
performance.read-ahead: on
performance.write-behind-window-size: 4MB
performance.cache-refresh-timeout: 1
performance.cache-size: 4GB
nfs.rpc-auth-allow: none
network.frame-timeout: 60
nfs.disable: on
performance.cache-max-file-size: 1GB
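
For reference, options like the above are reconfigured per volume with
the gluster CLI; a minimal sketch, assuming the volume is named "global"
as in the mount entry below:

# show the volume layout and any reconfigured options
gluster volume info global

# change a single option, e.g. the io-cache size
gluster volume set global performance.cache-size 4GB

# reset all reconfigured options back to defaults while experimenting
gluster volume reset global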


The servers are connected with bonded 1 Gb Ethernet, and have LSI
MegaRAID arrays with 12x1 TB disks in RAID-6, using an XFS file
system mounted with the following /etc/fstab options:

xfs     logbufs=8,logbsize=32k,noatime,nodiratime  0    0
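
As a sanity check on the arrays themselves, the raw brick read speed
can be measured directly on one of the servers; a rough sketch (the
test file path is hypothetical, and iflag=direct bypasses the page
cache so the first number is a cold-read baseline):

# on a brick server: cold sequential read straight off the RAID-6 array
dd if=/srv/data/testfile of=/dev/null bs=1M iflag=direct

# buffered read of the same file, for comparison (run twice to see the
# cached speed)
dd if=/srv/data/testfile of=/dev/null bs=1M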

and we use the FUSE client, mounted via /etc/fstab:

localhost:/global /global glusterfs defaults,direct-io-mode=enable,log-level=WARNING,log-file=/var/log/gluster.log 0 0
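
That fstab entry corresponds to a manual mount roughly along these
lines (a sketch, not copied from our setup):

mount -t glusterfs -o direct-io-mode=enable,log-level=WARNING,log-file=/var/log/gluster.log localhost:/global /global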

Our files are all >= 2 MB. When rsyncing from the volume we see about
50 MB/s read throughput on the first copy, which improves to 250 MB/s
on subsequent copies. This indicates to me that caching is working as
expected. However, I am rather surprised by the low 50 MB/s cold-read
speed: it is too low to be limited by the network, and the native disk
read performance is far better. Is there some configuration that can
improve this situation?
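
In case it is useful, the cold/warm numbers can be reproduced without
rsync by timing a plain sequential read through the FUSE mount; a
sketch, with a hypothetical file name:

# first (cold) read through the FUSE mount
dd if=/global/somefile of=/dev/null bs=1M

# repeat the same read; if caching is working it should be much faster,
# comparable to the 250 MB/s rsync figure above
dd if=/global/somefile of=/dev/null bs=1M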

thanks,


-- 


  Doug Schouten
  Research Associate
  TRIUMF


