[Gluster-users] GlusterFS 3.4 Fuse client Performace
Jung Young Seok
jung.youngseok at gmail.com
Fri Oct 25 09:01:28 UTC 2013
Dear GlusterFS Engineer,
I have a question about whether my GlusterFS servers and FUSE client
are performing properly on the specification below.
The FUSE client can write only *65MB*/s to a single GlusterFS server
(1 volume, 1 brick, no replica).
- Network bandwidth is sufficient for now; I've checked it with iftop.
- However, the same volume mounted over NFS can write *120MB*/s.
Could anyone check whether GlusterFS and the FUSE client are
performing properly?
Details are below.
=======================================================================
I've set up 4 GlusterFS servers and 1 FUSE client.
Their specs are as follows.
*Server x 4*
- CPU : Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz (2 cpu * 4 core)
- Memory : 32GB
- HDD (3TB 7.2K RPM SATA x 14)
* RAID6 (33TB)
* XFS
- OS : RHS 2.1
- The 4 Gluster servers will be used as one volume (2 replicas x 2 distributed)
- 1G network for replication
- 1G network for storage and management
- Current active profile: rhs-high-throughput
*FUSE Client (gluster 3.4)*
- CPU : Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
- Memory : 32GB
- OS : CentOS6.4
- 2G network for storage (NIC bonding)
All servers will eventually be on a 10G network (1G for now).
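For reference, the mounts look roughly like this (the hostname,
volume name, and mount points below are placeholders, not my actual
ones):

    # Native FUSE mount of the test volume
    mount -t glusterfs gfs-server1:/testvol /mnt/gluster-fuse

    # NFS mount of the same volume (Gluster's built-in NFSv3 server)
    mount -t nfs -o vers=3,tcp gfs-server1:/testvol /mnt/gluster-nfs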
I've run tests to check primitive disk performance (see the note
after this list).
- on the first GlusterFS server
* it can write 870MB/s (dd if=/dev/zero of=./dummy bs=4096 count=10000)
* it can read 1GB/s (cat test_file.23 > /dev/null)
- on the FUSE client (mounted volume: 1 brick, 1x distribute, no replica)
* it can write 64.8MB/s
- on the NFS client (same volume: 1 brick, 1x distribute, no replica)
* it can write 120MB/s (this saturates the 1G network bandwidth)
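Note on the raw write test above: bs=4096 count=10000 writes only
about 40MB, which fits entirely in the page cache, so the 870MB/s
figure is probably cache speed rather than true disk speed. A sketch
of a version that forces the data to disk (the file name is just an
example):

    # Write 4GB and flush it before dd reports the rate, so the
    # page cache does not inflate the number
    dd if=/dev/zero of=./dummy bs=1M count=4096 conv=fdatasync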
I wonder why the FUSE client is so much slower than the NFS client
(both against the same no-replica volume).
Is this normal performance?
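One test I still plan to run (a sketch, assuming the FUSE mount point
from above; as far as I understand, the kernel NFS client caches and
aggregates writes on the client side, while writes through FUSE reach
the server in smaller chunks):

    # Large-block sequential write through the FUSE mount, to see
    # whether the write size is the bottleneck
    dd if=/dev/zero of=/mnt/gluster-fuse/dummy bs=1M count=1024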
=========================================================================
Thanks in advance
Youngseok Jung