[Gluster-users] GlusterFS performance

Torbjørn Thorsen torbjorn at trollweb.no
Tue Mar 12 13:49:09 UTC 2013


That is the same transfer rate I'm seeing using O_SYNC writes, or using
a Gluster-backed file as a loop device for a Xen instance.
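(For reference, a Gluster-backed loop device can be set up roughly like
this; the paths and sizes are placeholders, not my exact configuration:

  # create a backing file on the Gluster mount and attach it as a loop device
  dd if=/dev/zero of=/mnt/gluster/xen-disk.img bs=1M count=10240
  losetup /dev/loop0 /mnt/gluster/xen-disk.img
  # the Xen domU then uses /dev/loop0 as its virtual block device
)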

I just did a quick "benchmark" (install and test only, no tuning)
comparing Gluster + loop device, Ceph-RBD and NFS, and saw pretty much
the same transfer speed for all three technologies.
NFS was a bit faster, about 20%, but that comparison might not be
fair considering the features of Gluster and Ceph.

Writes that are easier to buffer, such as a straight cp, are able to top
out both NICs, giving write speeds of around 100MB/s.
Using dd with conv=sync, or running sync afterwards, I see blocking,
presumably while waiting for the buffers to flush.
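Roughly, the two cases look like this (just a sketch; paths and sizes are
illustrative, and note that oflag=sync or conv=fsync, rather than conv=sync,
is what actually forces synchronous behaviour):

  # buffered write: the page cache absorbs it, both NICs can be saturated (~100MB/s)
  dd if=/dev/zero of=/mnt/gluster/buffered.bin bs=1M count=1024

  # synchronous write: each block must reach the bricks before dd continues (~18-19MB/s here)
  dd if=/dev/zero of=/mnt/gluster/sync.bin bs=1M count=1024 oflag=sync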

I'm not at all sure whether these numbers reflect the potential performance
of the hardware or software, but they seem consistent and maybe not all
that unreasonable.

On Tue, Mar 12, 2013 at 9:56 AM, Nikita A Kardashin
<differentlocal at gmail.com> wrote:
> Hello,
>
> I found another strange thing.
>
> In the dd test (dd if=/dev/zero of=2testbin bs=1M count=1024 oflag=direct)
> my volume shows only 18-19MB/s.
> Full network speed is 90-110MB/s, raw storage speed ~200MB/s.
>
> Volume type: replicated-distributed, 2 replicas, 4 nodes. Volumes are
> mounted via FUSE with the direct-io=enable option.
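> (For completeness, the mount and test are roughly as follows; server and
> volume names are placeholders, and the FUSE option is spelled
> direct-io-mode on the mount command line:
>
>   mount -t glusterfs -o direct-io-mode=enable server1:/myvolume /mnt/myvolume
>   dd if=/dev/zero of=/mnt/myvolume/2testbin bs=1M count=1024 oflag=direct
> )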
>
> It's sooo slooow, right?
>
>
> 2013/3/5 harry mangalam <harry.mangalam at uci.edu>
>>
>> This kind of info is surprisingly hard to obtain.  The gluster docs do
>> contain some of it, e.g.:
>>
>> <http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/>
>>
>> I also found well-described kernel tuning parameters in the FHGFS wiki (as
>> another distributed fs, they share some characteristics):
>>
>> http://www.fhgfs.com/wiki/wikka.php?wakka=StorageServerTuning
>>
>> and more XFS filesystem tuning params here:
>>
>> <http://www.mythtv.org/wiki/Optimizing_Performance#Further_Information>
>>
>> and here:
>> <http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition>
>>
>> But of course, YMMV and a number of these parameters conflict and/or have
>> serious tradeoffs, as you discovered.
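>> For illustration, the kind of knobs those pages discuss looks roughly like
>> this (device names and values are examples only, not recommendations):
>>
>>   # dirty-page writeback behaviour on the brick servers
>>   sysctl -w vm.dirty_background_ratio=5
>>   sysctl -w vm.dirty_ratio=20
>>   # deeper request queue and larger read-ahead on the brick block device
>>   echo 4096 > /sys/block/sdb/queue/nr_requests
>>   blockdev --setra 4096 /dev/sdb
>>   # typical XFS mount options for a brick
>>   mount -o noatime,nodiratime,inode64,logbufs=8 /dev/sdb1 /export/brick1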
>>
>> LSI recently loaned me a Nytro SAS controller (on-card SSD-cached) which
>> seems pretty phenomenal on a single brick (and is predicted to perform
>> well based on their profiling), but I am waiting for another node to
>> arrive before I can test it under true gluster conditions.  Anyone else
>> tried this hardware?
>>
>> hjm
>>
>> On Tuesday, March 05, 2013 12:34:41 PM Nikita A Kardashin wrote:
>> > Hello all!
>> >
>> > I solved this problem today.
>> > The root cause is an incompatibility between the Gluster cache and the KVM cache.
>> >
>> > The bug reproduces if a KVM virtual machine is created with the
>> > cache=writethrough option (the default for OpenStack) and hosted on a
>> > GlusterFS volume. If any other cache mode is used (cache=writeback, or
>> > cache=none with direct I/O), write performance to an existing file
>> > inside the VM is equal to bare-storage write performance from the host
>> > machine.
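>> > (In qemu/kvm terms the working setup corresponds to something like the
>> > following; the image path is a placeholder:
>> >
>> >   kvm -m 2048 -drive file=/mnt/gluster/instance1/disk,if=virtio,cache=none
>> >
>> > or cache=writeback instead of cache=none; in libvirt/OpenStack this maps
>> > to the cache attribute of the disk <driver> element.)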
>> >
>> > I think this should be documented in Gluster, and maybe a bug should be filed.
>> >
>> > Another question: where can I read about gluster tuning (optimal cache
>> > size, write-behind, flush-behind use cases and so on)? I have found only
>> > a list of options, without any how-tos or tested cases.
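>> > (The options themselves are set per volume, e.g. a sketch with a
>> > placeholder volume name and example values:
>> >
>> >   gluster volume set myvolume performance.cache-size 256MB
>> >   gluster volume set myvolume performance.write-behind-window-size 1MB
>> >   gluster volume set myvolume performance.flush-behind on
>> >
>> > but there is little guidance on when and why to change them.)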
>> >
>> >
>> > 2013/3/5 Toby Corkindale <toby.corkindale at strategicdata.com.au>
>> >
>> > > On 01/03/13 21:12, Brian Candler wrote:
>> > >> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
>> > >>>     If I try to execute the above command inside a virtual machine
>> > >>>     (KVM), the first time everything goes right - about 900MB/s (a
>> > >>>     cache effect, I think) - but if I run this test again on the
>> > >>>     existing file, the task (dd) hangs and can only be stopped with
>> > >>>     Ctrl+C.
>> > >>>     Overall virtual system latency is poor too. For example, apt-get
>> > >>>     upgrade upgrades the system very, very slowly, freezing on
>> > >>>     "Unpacking replacement" and other io-related steps.
>> > >>>     Does glusterfs have any tuning options that can help me?
>> > >>
>> > >> If you are finding that processes hang or freeze indefinitely, this is
>> > >> not a question of "tuning", this is simply "broken".
>> > >>
>> > >> Anyway, you're asking the wrong person - I'm currently in the process of
>> > >> stripping out glusterfs, although I remain interested in the project.
>> > >>
>> > >> I did find that KVM performed very poorly, but KVM was not my main
>> > >> application and that's not why I'm abandoning it.  I'm stripping out
>> > >> glusterfs primarily because it's not supportable in my environment,
>> > >> because there is no documentation on how to analyse and recover from
>> > >> failure scenarios which can and do happen. This point in more detail:
>> > >> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
>> > >>
>> > >> The other downside of gluster was its lack of flexibility, in particular
>> > >> the fact that there is no usage scaling factor on bricks, so that even
>> > >> with a simple distributed setup all your bricks have to be the same size.
>> > >> Also, the object store feature which I wanted to use has clearly had
>> > >> hardly any testing (even the RPM packages don't install properly).
>> > >>
>> > >> I *really* wanted to deploy gluster, because in principle I like the
>> > >> idea of a virtual distribution/replication system which sits on top of
>> > >> existing local filesystems.  But for storage, I need something where
>> > >> operational supportability is at the top of the pile.
>> > >
>> > > I have to agree; GlusterFS has been in use here in production for a
>> > > while, and while it mostly works, it's been fragile and documentation
>> > > has been disappointing. Despite 3.3 being in beta for a year, it still
>> > > seems to have been poorly tested. For example, I can't believe almost
>> > > no-one else noticed that the log files were busted, nor that the bug
>> > > report has been around for a quarter of a year without being responded
>> > > to or fixed.
>> > >
>> > > I have to ask -- what are you moving to now, Brian?
>> > >
>> > > -Toby
>> > >
>> > >
>> > > _______________________________________________
>> > > Gluster-users mailing list
>> > > Gluster-users at gluster.org
>> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>> ---
>> Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
>> [m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
>> 415 South Circle View Dr, Irvine, CA, 92697 [shipping]
>> MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
>> ---
>> "Something must be done. [X] is something. Therefore, we must do it."
>> Bruce Schneier, on American response to just about anything.
>
>
>
>
> --
> With best regards,
> differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
> System administrator.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



--
Kind regards,
Torbjørn Thorsen
Developer / operations technician

Trollweb Solutions AS
- Professional Magento Partner
www.trollweb.no

Daytime phone: +47 51215300
Evening/weekend phone: for customers with a service agreement

Visiting address: Luramyrveien 40, 4313 Sandnes
Postal address: Maurholen 57, 4316 Sandnes

Please note that our standard terms and conditions always apply


