[Gluster-users] GlusterFS performance
Steve Thompson
smt at cbe.cornell.edu
Wed Sep 26 21:57:16 UTC 2012
On Wed, 26 Sep 2012, Joe Landman wrote:
> Read performance with the gluster client isn't that good; write performance
> (effectively write caching at the brick layer) is pretty good.
Yep. I found out today that if I set up a 2-brick distributed
non-replicated volume using two servers, GlusterFS read performance is
good from the server that does _not_ contain a copy of the file. In fact,
I got 148 MB/sec, largely due to the two servers having dual-bonded
gigabit links (balance-alb mode) to each other via a common switch. From
the server that _does_ have a copy of the file, of course read performance
is excellent (over 580 MB/sec).
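
For anyone who wants to reproduce the test, here is a minimal sketch of the
setup (hostnames gfs1/gfs2 and the brick paths are placeholders for my
actual servers):

    # each server has two GigE ports bonded as bond0 (mode=balance-alb)
    # on gfs1:
    gluster peer probe gfs2
    # two bricks and no "replica" keyword, so this is a plain distributed volume
    gluster volume create distvol gfs1:/export/brick1 gfs2:/export/brick1
    gluster volume start distvol
    # native FUSE mount on a client
    mount -t glusterfs gfs1:/distvol /mnt/distvol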
The fact remains that read performance on another client (same subnet, but one
extra switch hop) is too low to be usable, and I can point the finger at
GlusterFS here, since NFS on the same client gets good performance, as does
MooseFS (although MooseFS has other issues). And when using a replicated
volume, GlusterFS write performance is also too low to be usable.
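
The replicated test volume was created roughly as below (names are again
placeholders; the last line shows one way to do the NFS comparison from the
same client, via Gluster's built-in NFSv3 server):

    # two-way replicated volume across the same pair of servers
    gluster volume create repvol replica 2 gfs1:/export/brick2 gfs2:/export/brick2
    gluster volume start repvol
    # native client mount vs. NFS mount from the same client machine
    mount -t glusterfs gfs1:/repvol /mnt/repvol
    mount -t nfs -o vers=3,tcp gfs1:/repvol /mnt/repvol-nfs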
> I know it's a generalization, but this is basically what we see. In the
> best case scenario, we can tune it pretty hard to get within 50% of
> native speed. But it takes lots of work to get it to that point, as well
> as an application which streams large IO. Small IO is (still) bad on
> the system IMO.
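
For the record, the sort of knobs I take this to mean are the per-volume
performance options; the values here are only illustrative, not
recommendations:

    gluster volume set distvol performance.cache-size 256MB
    gluster volume set distvol performance.io-thread-count 16
    gluster volume set distvol performance.write-behind-window-size 4MB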
I'm very new to GlusterFS, so it looks like I have my work cut out for me.
My ultimate aim is to build a large (100 TB) file system with redundancy
for Linux home directories and Samba shares. I've already given up on
MooseFS after several months' work.
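
The rough plan is a distributed-replicated layout, so capacity scales across
servers while every file keeps two copies; something like this (server names
and brick paths invented purely for illustration):

    # replica 2 over four servers gives a 2x2 distributed-replicated volume;
    # bricks are paired into replica sets in the order listed (gfs1+gfs2, gfs3+gfs4)
    gluster volume create homes replica 2 \
        gfs1:/export/brick1 gfs2:/export/brick1 \
        gfs3:/export/brick1 gfs4:/export/brick1
    gluster volume start homes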
Thanks for all your comments,
Steve