[Gluster-users] GlusterFS performance
Steve Thompson
smt at cbe.cornell.edu
Tue Sep 25 21:35:08 UTC 2012
GlusterFS newbie (less than a week) here. Running GlusterFS 3.2.6 servers
on Dell PE2900 systems with four 3.16 GHz Xeon cores and 16 GB memory
under CentOS 5.8.
For this test, I have a distributed volume of one brick only, so no
replication. I have made performance measurements with both dd and
Bonnie++, and they confirm each other; here I report only the dd numbers
(using bs=1024k). The file size is 1 TB. The brick is a RAID5 set of six
1 TB SATA drives, with the RAID handled by the PERC controller; the file
system is ext4.
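The single-brick volume itself was created in the usual way, roughly like
this (the volume name and brick path are placeholders, not my actual paths):

    gluster volume create testvol server1:/export/brick1
    gluster volume start testvol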
On the server:
* dd write to the GlusterFS volume (via the FUSE mount): 581 MB/sec.
* dd read from the volume (of=/dev/null): 607 MB/sec.
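For reference, the server-side tests were essentially the following (the
mount point and file name are illustrative, and count is only approximate
for 1 TB):

    # write test: ~1 TB of zeros through the local FUSE mount, 1 MiB blocks
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1024k count=1000000
    # read test: the same file back, discarding the data
    dd if=/mnt/gluster/testfile of=/dev/null bs=1024k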
On a fairly low-spec client system (CentOS 6.3, Pentium 4, 3.0 GHz), I
get:
* dd write to gfs: 99 MB/sec.
* dd read from gfs: 15 MB/sec.
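The client mounts the volume with the native FUSE client; the test looked
roughly like this (server name, volume name, and paths are the same
placeholders as above):

    mount -t glusterfs server1:/testvol /mnt/gluster
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1024k count=1000000   # write: 99 MB/sec
    dd if=/mnt/gluster/testfile of=/dev/null bs=1024k                 # read: 15 MB/sec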
Note that the write performance is good and the read performance is very
low.
Using NFS to read from the same server (via the kernel NFS server) gives
80 MB/sec, and iperf reports 117 MB/sec, so I don't believe there is
anything fundamentally wrong with the network. Using MooseFS on the same
hardware gives read and write performance very close to the NFS values.
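For comparison, the NFS and raw-network checks were along these lines
(export path and host name are placeholders):

    # read over NFS from the kernel NFS server on the same box
    mount -t nfs server1:/export/brick1 /mnt/nfs
    dd if=/mnt/nfs/testfile of=/dev/null bs=1024k     # ~80 MB/sec
    # raw TCP throughput: iperf -s on the server, then on the client:
    iperf -c server1 -f M                             # ~117 MBytes/sec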
Using a distributed volume of 2 bricks (no replication): write 45 MB/sec,
read 13 MB/sec.
Using a replicated volume of 2 bricks: write 23 MB/sec, read 13 MB/sec.
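The two-brick volumes were created with the standard commands, something
like this (host names and brick paths below are placeholders; the exact
placement of the second brick doesn't change the syntax):

    # 2-brick distributed volume, no replication
    gluster volume create distvol server1:/export/brick1 server2:/export/brick1
    gluster volume start distvol
    # 2-brick replicated volume (two copies of every file)
    gluster volume create replvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start replvol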
I understand why writing to a replicated volume loses 50% of the
performance, but I don't understand (1) why the read performance is always
so low, even with a single brick, and (2) why writing to a 2-brick
distributed, non-replicated volume gives only half the performance of a
1-brick volume.
Someone give me a clue, please.
Steve