[Gluster-users] AFR performance killer

Marko gluster at sopca.com
Tue Apr 14 13:22:23 UTC 2009


Hi,
I've attached the log file. It's the log of the node that ran the
client+brick configuration; the others were just bricks.

regards

Raghavendra G wrote:
> Hi Marko,
>
> Thanks for the document. Do you have glusterfs log files taken while 
> performing these benchmarks?
>
> regards,
>
> On Tue, Apr 14, 2009 at 3:50 PM, Marko <gluster at sopca.com 
> <mailto:gluster at sopca.com>> wrote:
>
>     Hi,
>     this document is made for my personal reference so it's a little raw.
>
>     regards
>
>
>     Raghavendra G wrote:
>>     Hi Marko,
>>
>>     The option disable-for-first-nbytes disables write behind for the
>>     first n bytes written, where n is the value of the option.
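>>
For reference, a minimal sketch of how that option could be set in the client volfile (the volume name matches the "wb" volume from the configuration quoted below; the 1MB value is only an illustrative assumption, not something stated in this thread):

volume wb
 type performance/write-behind
 subvolumes AFR
 option disable-for-first-nbytes 1MB
end-volume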
>>
>>     Also, can you please send the benchmark results for the tests
>>     you carried out?
>>
>>     regards,
>>     On Wed, Apr 8, 2009 at 6:18 PM, Marko <gluster at sopca.com
>>     <mailto:gluster at sopca.com>> wrote:
>>
>>         Hello,
>>
>>         To clarify:
>>          * I'm testing with glusterfs-2.0.0rc7
>>          * all bricks are on the same physical server (Xen guests).
>>         It's a testing environment.
>>
>>         These are a few benchmarks I've done so far:
>>           * time make-many-files #(a slightly modified version of the
>>         one found here: http://www.linuxinsight.com/files/make-many-files.c)
>>           * time dd if=/dev/zero bs=8 count=128000 of=file1MB.bin
>>         #(effectively creates lots of small consecutive fops)
>>           * time dd if=/dev/zero bs=4096 count=25000 of=file100MB.bin
>>         #(creates optimal transfer sizes from the HDD's physical
>>         point of view; I get the best results here with all
>>         configurations)
>>           * time cp -a 0 1 2 /tmp #(/tmp is mounted as tmpfs; 0 1 2
>>         are directories created by "make-many-files")
>>           * time rm -fr 0 1 2
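>>
The dd-based tests above can be wrapped in a small script. This is just a sketch: TESTDIR is an assumed variable for pointing the writes at the GlusterFS mount, and the time(1) prefixes from the original commands are left off for brevity.

```shell
#!/bin/sh
# Sketch of the dd benchmarks above. Set TESTDIR to the GlusterFS
# mount point; prefix the dd commands with time(1) to reproduce the
# measurements.
TESTDIR=${TESTDIR:-.}
cd "$TESTDIR" || exit 1

# 128000 x 8-byte writes = 1 MB of small consecutive fops
dd if=/dev/zero bs=8 count=128000 of=file1MB.bin 2>/dev/null

# 25000 x 4096-byte writes = ~100 MB in disk-friendly block sizes
dd if=/dev/zero bs=4096 count=25000 of=file100MB.bin 2>/dev/null
```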
>>
>>         I wish the GlusterFS team provided a similar set of tests, so
>>         one could measure performance in a way that can be compared
>>         with results from others. I think it would be of great value
>>         to all GlusterFS users and developers, and creating a basic
>>         set of such tests is a trivial task (maybe just use mine :D).
>>
>>         Below I attached my configuration. Without the write-behind
>>         translator I get better results in most of the tests.
>>         I can't understand why write-behind has such a bad impact on
>>         performance (it's supposed to be a performance *booster*).
>>         I've also noticed that the TCP packets are much smaller than
>>         the MTU in the first benchmark, meaning write-behind doesn't
>>         aggregate writes. Can you explain that?
>>         Can someone help me to get high performance with AFR?
>>
>>         Regards,
>>         Marko
>>
>>
>>         #------------- configuration ---------------------
>>         ########## server ###########################
>>         volume posix-brick
>>          type storage/posix
>>          option directory /srv/gluster
>>         end-volume
>>
>>         volume lock-brick
>>          type features/posix-locks
>>          subvolumes posix-brick
>>          option mandatory-locks on
>>         end-volume
>>
>>         volume server
>>               type protocol/server
>>               option transport-type tcp/server
>>               subvolumes lock-brick
>>               option auth.addr.lock-brick.allow *
>>         end-volume
>>
>>
>>
>>         ########## client ###########################
>>
>>         volume brick1
>>          type protocol/client
>>          option transport-type tcp
>>          option remote-host gluster-host1
>>          option remote-subvolume lock-brick
>>         end-volume
>>
>>         volume brick2
>>          type protocol/client
>>          option transport-type tcp
>>          option remote-host gluster-host2
>>          option remote-subvolume lock-brick
>>         end-volume
>>
>>         volume AFR
>>          type cluster/replicate
>>          subvolumes brick1 brick2
>>         end-volume
>>
>>         volume wb
>>          type performance/write-behind
>>          subvolumes AFR
>>          option flush-behind on
>>          option window-size 1MB
>>          option aggregate-size 512KB
>>         end-volume
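>>
For completeness, this is roughly how such volfiles would be used in GlusterFS 2.0 (the file paths and the mount point below are assumptions, not taken from this thread):

# on each server, assuming the server section is saved as server.vol
glusterfsd -f /etc/glusterfs/server.vol

# on the client, assuming the client section is saved as client.vol
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster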
>>
>>
>>
>>         _______________________________________________
>>         Gluster-users mailing list
>>         Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>>         http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>>
>>
>>     -- 
>>     Raghavendra G
>>
>
>
>
>
> -- 
> Raghavendra G
>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: glusterfsd.log.bz2
Type: application/octet-stream
Size: 28419 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20090414/ff08e96f/attachment.obj>
