[Gluster-users] Some benchmarks for anyone that's interested..

Anand Avati anand.avati at gmail.com
Thu May 10 19:24:22 UTC 2012

On Thu, May 10, 2012 at 1:18 AM, lejeczek <peljasz at yahoo.co.uk> wrote:

>  glusterfs is a distributed file system, fair enough, easy to maintain
> and very friendly to the user.
> Still, comparing it against a raw (local) file system, as I do via a local
> mount point backed by a single-brick volume, would be a valid route to
> see what glusterfs does when most of the variables are out of the equation.
> I mean, the basic logic one would follow is: unless a volume is smartly
> distributed, it will slow down even more (by some formula) as soon as
> other media get involved.
> Thus I believe glusterfs won't do for simpler scenarios, for instance when one
> would like to run a live replica of a storage:
> comparing a glusterfs two-brick replicated volume against even only bidirectional
> lsyncd, lsyncd wins by miles, even for very deep data trees with lots of files.
> All may appreciate the great bonus of clear and easy maintenance gluster
> offers (yet AFR-like setups are still not possible with the command utils), which is
> important for more complex configurations; for simpler ones this bonus does
> not outweigh the poor performance gluster suffers from, well, in my opinion.
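For reference, the single-brick loopback setup described above can be built roughly like this (the hostname `server1`, the brick path, and the mount point are placeholders, not values from the thread):

```shell
# Create and start a single-brick volume; server1:/data/brick1 is a placeholder
gluster volume create testvol server1:/data/brick1
gluster volume start testvol

# Mount it back on the same machine through the FUSE client,
# so every access to /mnt/testvol goes through glusterfs
mkdir -p /mnt/testvol
mount -t glusterfs server1:/testvol /mnt/testvol
```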
What you end up actually comparing, when you compare local disk access
with a loopback-mounted gluster volume on the same machine, is the constant,
high, per-syscall latency overhead of FUSE. Of course the comparison is going
to be badly obscured by it. But this constant latency becomes much less of a
problem the moment both "participants" get a network round trip included in
their operation.
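One way to see that per-syscall overhead directly is to time a cheap metadata call in a tight loop; run it once against a path on the local file system and once against a path under a glusterfs FUSE mount and compare the per-call latency. A minimal sketch (here it stats a local temp file; the comparison mount point is up to you):

```python
import os
import tempfile
import time

# Scratch file on the local file system; point `path` at a file under a
# FUSE mount instead to measure the per-syscall latency there.
fd, path = tempfile.mkstemp()
os.close(fd)

N = 10000
start = time.perf_counter()
for _ in range(N):
    os.stat(path)  # one cheap metadata syscall per iteration
elapsed = time.perf_counter() - start

mean_us = elapsed / N * 1e6
print(f"mean stat() latency: {mean_us:.2f} us over {N} calls")
os.remove(path)
```

On a FUSE mount each of those calls takes an extra round trip through the userspace daemon, which is the constant cost being measured.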

When you are comparing gluster vs. lsyncd you are essentially comparing
synchronous vs. asynchronous replication. They are very different
techniques with very different applicability and expectations based on your use
case.
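The distinction can be sketched in a few lines (a toy model only; the class names and in-memory "replicas" are illustrative, not gluster or lsyncd APIs): synchronous replication acknowledges a write only after every replica has it, while asynchronous replication acknowledges immediately and lets the replica catch up later.

```python
import queue
import threading

class SyncReplicator:
    """Synchronous (AFR-style) model: write() returns only after
    every replica holds the data."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, data):
        for r in self.replicas:   # block on each replica in turn
            r.append(data)
        return True               # ack only once all copies exist

class AsyncReplicator:
    """Asynchronous (lsyncd-style) model: ack immediately, ship the
    change to the replica from a background thread."""
    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica
        self.q = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            self.replica.append(self.q.get())
            self.q.task_done()

    def write(self, data):
        self.primary.append(data)
        self.q.put(data)          # replica catches up later
        return True               # ack before the replica has the data

a, b = [], []
SyncReplicator([a, b]).write("x")
assert a == b == ["x"]            # both copies exist at ack time

p, r = [], []
ar = AsyncReplicator(p, r)
ar.write("y")                     # returns before r is updated
ar.q.join()                       # only now is the replica guaranteed current
assert p == r == ["y"]
```

The synchronous path pays the replica's latency on every write but never acknowledges data that exists in only one place; the asynchronous path is fast at ack time but has a window where primary and replica disagree.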

