[Gluster-users] GlusterFS FUSE Client Performance Issues

Mark Selby mselby at unseelie.name
Fri Feb 26 17:45:02 UTC 2016


Both the client and the server are running Ubuntu 14.04 with GlusterFS 3.7 from the Ubuntu PPA.

I am going to use Gluster to create a simple replicated NFS server. I was hoping to use the native FUSE client as well, to get seamless failover, but I am running into performance issues that will prevent me from doing so.

I have a replicated Gluster volume on a 24-core server with 128 GB RAM, 10 GbE networking, and RAID-10 storage served via ZFS.
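For reference, a two-brick replicated volume of this shape is typically created along these lines; the second hostname and the brick paths below are placeholders, not the actual layout:

```shell
# Two-way replica across two storage nodes (dc1strg002x is hypothetical).
gluster volume create backups replica 2 \
    dc1strg001x:/zfspool/glusterfs/backups \
    dc1strg002x:/zfspool/glusterfs/backups
gluster volume start backups
gluster volume info backups   # should report 'Type: Replicate'
```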

From a remote client I mount the same volume via both NFS and the native client.

I ran some very basic performance tests just to get a feel for the penalty the user-space client would incur.

I must admit I was shocked at how poorly the Gluster FUSE client performed. I know that small block sizes are not Gluster's favorite, but even at larger ones the penalty is substantial.

Is this to be expected or is there some configuration that I am missing?
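On the configuration question: a few volume options are commonly suggested for improving FUSE write throughput. These are a starting point to experiment with, not a known fix, and the values below are illustrative:

```shell
# Client-side write aggregation and caching; tune values to taste.
gluster volume set backups performance.write-behind on
gluster volume set backups performance.write-behind-window-size 4MB
gluster volume set backups performance.cache-size 256MB
gluster volume set backups performance.io-thread-count 32
gluster volume set backups performance.client-io-threads on
```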

If providing any more info would be helpful - please let me know.

Thanks!

root@vc1test001 /root 489# mount -t nfs dc1strg001x:/zfspool/glusterfs/backups /mnt/backups_nfs
root@vc1test001 /root 490# mount -t glusterfs dc1strg001x:backups /mnt/backups_gluster
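Since the goal is seamless failover with the native client, note that the FUSE mount can be given fallback servers for the initial volfile fetch; after mounting, the client talks to all bricks directly, so brick failover is handled automatically. The second hostname here is a placeholder:

```shell
# Fall back to dc1strg002x (hypothetical) if dc1strg001x is down at mount time.
mount -t glusterfs -o backup-volfile-servers=dc1strg002x \
    dc1strg001x:backups /mnt/backups_gluster
```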

root@vc1test001 /mnt/backups_nfs 492# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.6763 s, 100 MB/s

root@vc1test001 /mnt/backups_nfs 510# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.7434 s, 99.9 MB/s

root@vc1test001 /mnt/backups_nfs 517# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 19.0354 s, 113 MB/s

root@vc1test001 /mnt/backups_gluster 495# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 102.058 s, 2.6 MB/s

root@vc1test001 /mnt/backups_gluster 513# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 114.053 s, 9.4 MB/s

root@vc1test001 /mnt/backups_gluster 514# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 123.904 s, 17.3 MB/s
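One caveat with these dd runs (my observation, not part of the original methodology): without a sync flag, dd reports throughput before data has necessarily reached stable storage, so an async NFS mount can post inflated numbers while the FUSE client pays the full cost. Adding conv=fdatasync makes dd fsync the file before reporting, which gives a fairer comparison; the path below is a local example:

```shell
# Same write test, but force data to stable storage before dd reports
# throughput, removing client-side write caching from the comparison.
dd if=/dev/zero of=/tmp/ddtest bs=64k count=1024 conv=fdatasync
```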

root@vc1test001 /tmp 504# rsync -av --progress testfile1 /mnt/backups_nfs/
sending incremental file list
testfile1
   1,073,741,824 100%   89.49MB/s    0:00:11 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  74,069,247.72 bytes/sec
total size is 1,073,741,824  speedup is 1.00

root@vc1test001 /tmp 505# rsync -av --progress testfile1 /mnt/backups_gluster/
sending incremental file list
testfile1
   1,073,741,824 100%   25.94MB/s    0:00:39 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  27,189,977.01 bytes/sec
total size is 1,073,741,824  speedup is 1.00
