[Gluster-users] Configuration suggestions (aka poor/slow performance on new hardware)

Raghavendra G raghavendra at gluster.com
Wed Mar 31 04:06:56 UTC 2010


Hi Ed,

On Mon, Mar 29, 2010 at 3:38 PM, Ed W <lists at wildgooses.com> wrote:

> On 26/03/2010 18:22, Ramiro Magallanes wrote:
>
>>
>> You could run the genfiles script simultaneously (my English is really
>> poor, we could change the subject of this mail to something like "poor
>> performance and poor English" xDDD), but it's not a threaded application
>> (iozone rulez).
>>
>> If I run 3 processes of genfiles.sh I get 440, 441, and 450 files (1300
>> files approx.), but adding more processes doesn't get you a much bigger
>> number :) (a rough sketch of the per-process loop is below).
>>
>> With 6 genfiles running at the same time I get:
>>
>> PID 12832 : 249 files created in 60 seconds.
>> PID 12830 : 249 files created in 60 seconds.
>> PID 12829 : 248 files created in 60 seconds.
>> PID 12827 : 262 files created in 60 seconds.
>> PID 12828 : 252 files created in 60 seconds.
>> PID 12831 : 255 files created in 60 seconds.
>>
>> 1515 files in total.
>>
>>
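>> For reference, roughly what each genfiles process does - this is a
>> Python equivalent sketched from memory; my actual script is a shell
>> loop, so the details may differ:
>>
>> import os
>> import sys
>> import time
>>
>> def genfiles(directory, seconds=60, size=4096):
>>     """Create as many small files as possible within `seconds`."""
>>     payload = b"x" * size
>>     count = 0
>>     deadline = time.time() + seconds
>>     while time.time() < deadline:
>>         path = os.path.join(directory, "f-%d-%d" % (os.getpid(), count))
>>         with open(path, "wb") as f:
>>             f.write(payload)
>>         count += 1
>>     print("PID %d : %d files created in %d seconds."
>>           % (os.getpid(), count, seconds))
>>
>> if __name__ == "__main__":
>>     genfiles(sys.argv[1])
>>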
>
> Just speaking theoretically, but I believe that without a write-behind
> cache on the client side, gluster effectively has to "sync" after each
> file operation (well, it's probably only half a sync, but some variation
> of that problem).  This is safe, but of course it reduces write speed to
> something that is a function of the network latency.
>
> So in your case, if you had, say, around 1ms of latency, you would be
> limited to around 1,000 synchronous operations per second simply because
> of the wait for the far side to ack each operation.  This seems to
> correlate with the figures you are seeing (can you post your ping times
> and correlate them with your IOs per second?)
>
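> A quick back-of-the-envelope sketch of that arithmetic (the RTT and the
> ops-per-create figures below are illustrative guesses, not measurements
> from your setup):
>
> def max_sync_ops_per_sec(rtt_seconds):
>     # Each synchronous operation waits roughly one round trip for the
>     # far side to ack it before the next one can start.
>     return 1.0 / rtt_seconds
>
> rtt = 0.001          # assumed 1ms round-trip time
> ops = max_sync_ops_per_sec(rtt)
> print("~%d synchronous ops/sec at %.1fms RTT" % (ops, rtt * 1000))
>
> # A single file create is really several operations (lookup, create,
> # write, setxattr, close, ...), so the files/sec ceiling is lower still.
> ops_per_create = 10  # illustrative guess; varies with the volume config
> print("~%d file creates/sec at %d ops per create"
>       % (ops / ops_per_create, ops_per_create))
>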
> I don't see this as a gluster issue - it's a fundamental consequence of
> wanting an ack for network-based operations.  Many people switch to
> Fibre Channel or similar for the IO for exactly this reason: if you can
> drop the latency by a factor of 10, you increase your IOs per second by
> a factor of 10.
>
> Untested, but at least in theory, switching on write-behind caching on
> the client should mean that it ploughs on without waiting out the network
> latency for each ack.  There are potential issues, but if that trade-off
> is OK for your requirements, give it a try?  (A rough sketch of the
> relevant volfile section is below.)
>
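> For reference, the client-side write-behind section of a volfile looks
> roughly like this (option names as I remember them from the 3.0-era
> docs, and "client" stands for whatever your existing client/protocol
> volume is called - check the defaults shipped with your version):
>
> volume writebehind
>   type performance/write-behind
>   option cache-size 4MB     # how much data may be buffered before flushing
>   option flush-behind on    # return early from flush/close; helps small files
>   subvolumes client
> end-volume
>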
>
> Note, just an idea for the gluster guys, but I think I saw in AFS (or was
> it something else?) a kind of hybrid server-side writeback cache.  The
> idea was that the server could ack the write once at least a certain
> number of storage nodes had the pending IO in memory, even if it hadn't
> hit disk yet.  This is subtly different from plain server-side writeback,
> but it seems like a very neat idea.  It's probably not relevant to
> small-file creation tests like the one above, but it could help in other
> situations.
>
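> Something like this, in rough Python (the node object and its
> buffer_in_memory() call are made up for illustration - this is not from
> AFS or gluster):
>
> import concurrent.futures
>
> def replicate_write(nodes, data, min_memory_acks=2):
>     """Send `data` to every storage node and return as soon as
>     `min_memory_acks` of them confirm it is held in memory, without
>     waiting for any of them to write it to disk."""
>     pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(nodes))
>     futures = [pool.submit(node.buffer_in_memory, data) for node in nodes]
>     acks = 0
>     for done in concurrent.futures.as_completed(futures):
>         if done.result():             # node holds the write in RAM
>             acks += 1
>             if acks >= min_memory_acks:
>                 pool.shutdown(wait=False)
>                 return True           # ack the client; disks catch up later
>     pool.shutdown(wait=False)
>     return False                      # too few nodes even buffered it
>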

The current design of write-behind acknowledges writes (to applications) even
when they have not hit the disk. Can you please explain how this design is
different (if it is different) from the idea you've described above?
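
For reference, from the application's point of view the write-behind
translator behaves roughly like this (a simplified sketch for discussion,
not the actual translator code):

import collections
import threading

class WriteBehind:
    """Acknowledge write() to the application immediately and push the
    buffered data to the server in the background (simplified sketch)."""

    def __init__(self, send_to_server):
        self.send_to_server = send_to_server  # callable doing the real network write
        self.pending = collections.deque()
        self.lock = threading.Lock()

    def write(self, offset, data):
        with self.lock:
            self.pending.append((offset, data))
        threading.Thread(target=self._flush).start()
        return len(data)   # acked before the data has reached the server or disk

    def _flush(self):
        with self.lock:
            while self.pending:
                offset, data = self.pending.popleft()
                self.send_to_server(offset, data)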


> I do think some of the benchmarks posted here might not really be
> accounting for network latency as the limiting bottleneck?
>
> Good luck
>
> Ed W
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>



-- 
Raghavendra G

