[Gluster-users] AFR performance killer

Marko gluster at sopca.com
Wed Apr 8 14:18:13 UTC 2009


Hello,

To clarify:
  * I'm testing with glusterfs-2.0.0rc7
  * all bricks are on the same physical server (Xen guests); it's a testing 
environment.

These are a few benchmarks I've done so far:
    * time make-many-files #(this is a slightly modified version of the 
program I found here: http://www.linuxinsight.com/files/make-many-files.c; 
a simplified sketch of what it does follows this list)
    * time dd if=/dev/zero bs=8 count=128000 of=file1MB.bin 
#(effectively generates lots of small consecutive write fops)
    * time dd if=/dev/zero bs=4096 count=25000 of=file100MB.bin 
#(issues writes that are optimal from the HDD's physical point of view; I 
get the best results here with all configurations)
    * time cp -a 0 1 2 /tmp #(/tmp is mounted as tmpfs; 0, 1 and 2 are the 
directories created by "make-many-files")
    * time rm 0 1 2 -fr
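
The small-file test is really just "create a very large number of tiny 
files"; a simplified Python stand-in for make-many-files (the directory 
layout and file counts here are arbitrary, not the exact ones from the C 
program) would be:

#---- make-many-files-like sketch (Python) ----
# Creates directories 0, 1, 2, fills each with many tiny files and reports
# the elapsed time.  The cost is almost entirely in the create/lookup
# metadata operations, not in the data written.
import os, time

DIRS = ["0", "1", "2"]
FILES_PER_DIR = 10000          # arbitrary; large enough to stress metadata ops

start = time.time()
for d in DIRS:
    os.makedirs(d, exist_ok=True)
    for i in range(FILES_PER_DIR):
        with open(os.path.join(d, "f%06d" % i), "w") as f:
            f.write("x")       # one byte per file
print("created %d files in %.2f s"
      % (len(DIRS) * FILES_PER_DIR, time.time() - start))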

I wish the GlusterFS team provided a similar set of tests, so that one 
could measure performance in a way that is comparable with other people's 
results. I think it would be of great value to all GlusterFS users and 
developers, and putting together a basic set of such tests is a trivial 
task (maybe just use mine :D); a small harness along those lines is 
sketched below.
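
For example, a minimal harness around the commands above could be as 
simple as this (just a sketch; the make-many-files step is left out but 
could be added the same way, and it should be run from the GlusterFS mount 
point):

#---- trivial benchmark harness sketch (Python) ----
# Runs the same commands listed above and prints the wall-clock time for
# each.  Paths and sizes are the ones from my tests; adjust to taste.
import subprocess, time

TESTS = [
    "dd if=/dev/zero bs=8 count=128000 of=file1MB.bin",
    "dd if=/dev/zero bs=4096 count=25000 of=file100MB.bin",
    "cp -a 0 1 2 /tmp",
    "rm -fr 0 1 2",
]

for cmd in TESTS:
    start = time.time()
    subprocess.call(cmd, shell=True)
    print("%-55s %6.2f s" % (cmd, time.time() - start))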

My configuration is attached below. Without the write-behind translator I 
get better results in most of the tests.
I can't understand why write-behind has such a negative impact on 
performance, given that it is supposed to be a performance *booster*.
I've also noticed that in the first benchmark the TCP packets are much 
smaller than the MTU, which suggests write-behind isn't actually 
aggregating the writes.
Can you explain that?
Can someone help me to get high performance with AFR?
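
For what it's worth, my mental model of what write-behind/flush-behind 
should be doing is roughly the following (a conceptual Python sketch only, 
not the actual translator code); with the 8-byte writes from the first dd 
test I would expect the data to leave the client in large aggregated 
chunks rather than tiny TCP packets:

#---- conceptual write-behind sketch (Python) ----
# Small application writes are buffered and only handed to the backend
# (i.e. the network) once the aggregate reaches some threshold, or on an
# explicit flush.
class WriteBehindSketch:
    def __init__(self, backend_write, aggregate_size=512 * 1024):
        self.backend_write = backend_write   # callable doing the "network" write
        self.aggregate_size = aggregate_size
        self.buf = bytearray()

    def write(self, data):
        self.buf.extend(data)
        if len(self.buf) >= self.aggregate_size:
            self.flush()

    def flush(self):
        if self.buf:
            self.backend_write(bytes(self.buf))
            self.buf = bytearray()

chunks = []
wb = WriteBehindSketch(chunks.append)
for _ in range(128000):                      # same as: dd bs=8 count=128000
    wb.write(b"\0" * 8)
wb.flush()
print("128000 small writes reached the 'network' as %d chunks: %s"
      % (len(chunks), [len(c) for c in chunks]))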

Regards,
Marko


#------------- configuration ---------------------
########## server ###########################
volume posix-brick
  type storage/posix
  option directory /srv/gluster
end-volume

volume lock-brick
  type features/posix-locks
  subvolumes posix-brick
  option mandatory-locks on
end-volume

volume server
        type protocol/server
        option transport-type tcp/server
        subvolumes lock-brick
        option auth.addr.lock-brick.allow *
end-volume



########## client ###########################

volume brick1
 type protocol/client
 option transport-type tcp
 option remote-host gluster-host1
 option remote-subvolume lock-brick
end-volume

volume brick2
 type protocol/client
 option transport-type tcp
 option remote-host gluster-host2
 option remote-subvolume lock-brick
end-volume

volume AFR
 type cluster/replicate
 subvolumes brick1 brick2
end-volume

volume wb
  # write-behind loaded on top of replicate (topmost volume on the client side)
  type performance/write-behind
  subvolumes AFR
  option flush-behind on
  option window-size 1MB
  option aggregate-size 512KB
end-volume




