[Gluster-users] [OFF] Re: Meta
Jeff Darcy
jdarcy at redhat.com
Mon Jan 21 13:45:34 UTC 2013
On 01/21/2013 07:35 AM, Papp Tamas wrote:
> Well, actually, is there a comparison between the two systems? Pros/cons, usage
> scenarios, stability, real use cases, etc.?
There are two that I know of.
http://hekafs.org/index.php/2012/11/trying-out-moosefs/ (that's me)
http://blog.tinola.com/?e=13 (someone I don't know)
TBH I wouldn't read too much into the performance tests in the second. Those
mostly favor GlusterFS, but they're pretty simplistic single-threaded tests
that I don't think reflect even the simplest real-world scenarios. My tests
weren't exactly exhaustive either, but IMNSHO they at least give a better
picture of what to expect when using each system as it was designed to be used.
Still, some of the other author's non-performance points are good.
> To be more on-topic:
> Are there situations where GlusterFS is definitely not recommended? Will
> there be changes in the future?
There are definitely some "sore spots" when it comes to performance.
Synchronous random small-write performance with replication (a common need when
hosting virtual images or databases) has historically been one. If you're
using kvm/qemu you can avoid the FUSE overhead by using the qemu driver, and in
that case I think we're very competitive. Otherwise people with those
workloads might be better off with Ceph. The other big pain point is directory
operations. Again because of FUSE, things like large directory listings or
include/library searches can be pretty painful, though be wary of jumping to
conclusions there: I've found that even Ceph's kernel-based client seems to
have anomalies in that area. We're working on some fixes in this
area, but I don't know when they'll reach fruition.
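To illustrate the qemu route mentioned above: qemu 1.3 and later can open an
image on a Gluster volume directly through libgfapi using a gluster:// URI,
bypassing the FUSE mount entirely. The host name ("server1"), volume name
("myvol"), and image path below are placeholders, and this is only a minimal
sketch of the idea, not a tuning guide:

```shell
# Create a qcow2 image directly on the Gluster volume -- no FUSE mount
# is involved; qemu-img speaks to glusterd over libgfapi.
# "server1" and "myvol" are placeholder host/volume names.
qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G

# Boot a guest against the same image over libgfapi, using virtio
# for the disk interface.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio
```

Whether this closes the gap for your workload is something to measure; the
point is simply that the synchronous small-write penalty discussed above is
largely a FUSE-path cost, and this path avoids it.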
As always, the real answer depends on details. I think we win big on initial
setup and flexibility (built-in feature set and potential to add features
yourself). I will be the first to admit that debugging and tuning can be pretty
miserable, but AFAICT that is true for *every* distributed filesystem of the
last twenty years. I'm hoping we can raise the bar on that some day, as we did
for initial setup. Meanwhile, the important thing is to consider one's own
specific needs and evaluate performance in that context. All a general
comparison can really do is tell you which candidates you should test.