[Gluster-users] GlusterFS performance

Nikita A Kardashin differentlocal at gmail.com
Tue Mar 5 06:34:41 UTC 2013


Hello all!

I solved this problem today.
The root cause is an incompatibility between the GlusterFS cache and the KVM cache.

The bug reproduces when a KVM virtual machine is created with the
cache=writethrough option (the OpenStack default) and hosted on a GlusterFS
volume. With any other cache mode (cache=writeback, or cache=none with
direct I/O), write performance to an existing file inside the VM is equal to
the bare-storage write performance measured from the host machine.
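
For reference, this is roughly what the setting looks like in the libvirt
domain XML if you switch a guest disk to cache=none (the file path, target
device and io='native' attribute below are placeholders for illustration,
not copied from my actual config):

    <disk type='file' device='disk'>
      <!-- cache='none' bypasses the host page cache (direct I/O),
           avoiding the writethrough path that triggers the hang -->
      <driver name='qemu' type='qcow2' cache='none' io='native'/>
      <source file='/var/lib/nova/instances/instance-00000001/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>

On a plain qemu-kvm command line the equivalent would be something like
-drive file=...,if=virtio,cache=none,aio=native.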

I think this should be documented for Gluster, and perhaps a bug should be filed.

One other question: where can I read about Gluster tuning (optimal cache
size, write-behind and flush-behind use cases, and so on)? So far I have only
found the list of options, without any how-to guides or tested configurations.
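
In case it helps anyone else searching the archives, the only way I have
found to experiment so far is via "gluster volume set", along these lines
(the volume name and the values are just examples I have been trying, not
recommendations):

    # list the tunables the CLI knows about, with defaults
    gluster volume set help

    # the options mentioned above (values are guesses, not tuned)
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.write-behind-window-size 4MB
    gluster volume set myvol performance.flush-behind on

But that still does not tell me which combinations are known to work well
for a KVM-hosting workload, which is really what I am after.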


2013/3/5 Toby Corkindale <toby.corkindale at strategicdata.com.au>

> On 01/03/13 21:12, Brian Candler wrote:
>
>> On Fri, Mar 01, 2013 at 03:30:07PM +0600, Nikita A Kardashin wrote:
>>
>>>     If I try to execute the above command inside a virtual machine (KVM),
>>>     it goes fine the first time - about 900MB/s (a cache effect, I think) -
>>>     but if I run the test again on the existing file, the dd task hangs
>>>     and can only be stopped with Ctrl+C.
>>>     Overall virtual system latency is poor too. For example, apt-get
>>>     upgrade is very, very slow, freezing on "Unpacking replacement" and
>>>     other I/O-related steps.
>>>     Does glusterfs have any tuning options that can help me?
>>>
>>
>> If you are finding that processes hang or freeze indefinitely, this is not
>> a question of "tuning", this is simply "broken".
>>
>> Anyway, you're asking the wrong person - I'm currently in the process of
>> stripping out glusterfs, although I remain interested in the project.
>>
>> I did find that KVM performed very poorly, but KVM was not my main
>> application and that's not why I'm abandoning it.  I'm stripping out
>> glusterfs primarily because it's not supportable in my environment,
>> because
>> there is no documentation on how to analyse and recover from failure
>> scenarios which can and do happen. This point in more detail:
>> http://www.gluster.org/pipermail/gluster-users/2013-January/035118.html
>>
>> The other downside of gluster was its lack of flexibility, in particular
>> the
>> fact that there is no usage scaling factor on bricks, so that even with a
>> simple distributed setup all your bricks have to be the same size.  Also,
>> the object store feature, which I wanted to use, has clearly had hardly
>> any testing (even the RPM packages don't install properly).
>>
>> I *really* wanted to deploy gluster, because in principle I like the idea
>> of
>> a virtual distribution/replication system which sits on top of existing
>> local filesystems.  But for storage, I need something where operational
>> supportability is at the top of the pile.
>>
>
> I have to agree; GlusterFS has been in use here in production for a while,
> and while it mostly works, it's been fragile and documentation has been
> disappointing. Despite 3.3 being in beta for a year, it still seems to have
> been poorly tested. For example, I can't believe almost no one else noticed
> that the log files were busted, nor that the bug report has been around for
> a quarter of a year without being responded to or fixed.
>
> I have to ask -- what are you moving to now, Brian?
>
> -Toby
>



-- 
With best regards,
differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
System administrator.