[Gluster-users] Why do I get good write IOPS but poor read IOPS with more stripes?

肖力 exiaoli at 163.com
Fri Nov 9 02:15:33 UTC 2012


I need high random read and write IOPS with small files, that is, high 4k random read and write IOPS, not high sequential throughput on big files.
I want to find a way to use more PC servers, more hard disks, and more NICs to improve performance.
There seem to be many ways to choose from:
raid or no raid?
Do more stripes mean more performance?
How do I balance data safety and performance?
I have been testing for 2 weeks, trying different servers, stripe groups, raid and no raid...
To be honest, I have not found a good way and I am stuck. Can you offer me some suggestions? Thanks
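To be concrete, by "4k random read/write IOPS" I mean tests of roughly this shape. This is only a sketch with fio; the /mnt/test mount point, file size, and runtime are illustrative assumptions, not my exact job:

fio --name=randwrite4k --directory=/mnt/test --rw=randwrite --bs=4k \
    --size=1g --runtime=60 --ioengine=libaio --direct=1
fio --name=randread4k --directory=/mnt/test --rw=randread --bs=4k \
    --size=1g --runtime=60 --ioengine=libaio --direct=1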

On 2012-11-09 06:58:17, "Brian Foster" <bfoster at redhat.com> wrote:
>On 11/07/2012 09:06 PM, 肖力 wrote:
>> Hi, I have 4 Dell 2970 servers; three of them have 146G x 6 hard disks, and one has 72G x 6:
>> 
>> I want to test IOPS at different stripe counts, and my test results are:
>> 
>> ----------------------------------------------------------------------
>> 
>> no stripe 
>> 
>> gluster volume create test-volume   transport tcp \
>> 172.16.20.231:/exp2  \
>> 172.16.20.232:/exp2 \
>> 172.16.20.233:/exp2  \
>> 
>> 4k 100% random write: 288 IOPS
>> 
>> 4k 100% random read: 264 IOPS
>> 
>> ----------------------------------------------------------------------
>> 
>> 2 stripe
>> 
>> gluster volume create test-volume   transport tcp \
>> 172.16.20.231:/exp2  \
>> 172.16.20.232:/exp2 \
>> 172.16.20.233:/exp2  \
>> 
>
>This looks the same as the "no stripe" case. Given your numbers differ,
>I presume a copy/paste error?
I checked my record and it is correct; I can test it again if I have time.
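For reference, the command I would expect for a 2-stripe volume uses an explicit stripe count, roughly like this sketch; the volume name and the /exp3 brick paths here are made up for illustration, not copied from my notes, and I will double-check which form I actually used:

gluster volume create test-volume2 stripe 2 transport tcp \
172.16.20.231:/exp3 \
172.16.20.232:/exp3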
>
>> 4k 100% random write: 439 IOPS
>> 
>> 4k 100% random read: 241 IOPS
>> 
>> 
>> ----------------------------------------------------------------------
>> 
>> 6 stripe
>> 
>> gluster volume create test-volume3 stripe 6 transport tcp \
>> 172.16.20.231:/exp5 172.16.20.231:/exp6 \
>> 172.16.20.232:/exp5 172.16.20.232:/exp6 \
>> 172.16.20.233:/exp5 172.16.20.233:/exp6 \
>> 
>
>Are you adding bricks on the same spindle(s) here? For example, are
>172.16.20.231:/exp{5,6} on the same set of drives? If so, the extra
>bricks might not be buying you anything.
>
That's not clear to me; I used the default commands, and I don't know how Gluster controls it.
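One way I can check is to compare the filesystem behind each brick directory, for example with df (the paths are taken from the bricks above):

df -h /exp5 /exp6

If both directories report the same device, the two bricks share the same disks.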

>> 4k 100% random write: 683 IOPS
>> 
>> 4k 100% random read: 151 IOPS
>> 
>> ----------------------------------------------------------------------
>> 
>> My question is: why do more stripes give better write IOPS but worse read IOPS?
>> 
>
>I don't have a specific answer, but I ran a few similar random read
>tests to remind myself of the behavior here. One thing our performance
>guys have pointed out is that you'll want to run multiple threads
>against a gluster native client to maximize iops. Perhaps you're doing
>that, but you haven't provided much detail about the actual test you're
>running.
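Understood, I will try multiple threads. A multi-threaded variant of my 4k random read test might look like this fio sketch; the numjobs and iodepth values are just a guessed starting point, and /mnt/test is an assumed mount point, not what I actually ran:

fio --name=randread4k --directory=/mnt/test --rw=randread --bs=4k \
    --size=1g --numjobs=8 --iodepth=16 --ioengine=libaio --direct=1 \
    --group_reporting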
>
>Along with that, there is a known bottleneck in the client that is
>alleviated by using the 'gid-timeout' mount option (i.e., -o
>gid-timeout=1).
OK, I will test gid-timeout=1. Thanks
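If I understand correctly, the option goes on the native client mount, roughly like this; the server address is from my setup above, and the /mnt/test mount point is an assumption:

mount -t glusterfs -o gid-timeout=1 172.16.20.231:/test-volume /mnt/test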


>
>Brian
>
>> Thanks
>> 
>> Xiao Li
>> 
>> 
>> 
>> 
>> 
>