[Gluster-users] Configuration suggestions (aka poor/slow performance on new hardware)

Stephan von Krawczynski skraw at ithnet.com
Fri Mar 26 17:17:22 UTC 2010


Can you check how things look when using ext3 instead of xfs?
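
For reference, reformatting one brick with ext3 for a quick comparison
could look roughly like this (a sketch only: /dev/sdb1 and /data/brick
are placeholder names for the brick device and mount point, and the
brick has to be empty or backed up first):

	# stop the glusterfsd that exports this brick, then:
	umount /data/brick
	mkfs.ext3 /dev/sdb1
	mount /dev/sdb1 /data/brick
	# restart glusterfsd and rerun genfiles.sh against the client mount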

On Fri, 26 Mar 2010 18:04:07 +0100
Ramiro Magallanes <listas at sabueso.org> wrote:

> 	Hello there!
> 
> I'm working on a 6-node cluster built on new SuperMicro hardware.
> The cluster has to store millions of JPEGs (about 200k-4MB each) and
> small text files.
> 
> Each node is :
> 
> 	-Single Xeon(R) CPU E5405 @ 2.00GHz (4 cores)
> 	-4 GB RAM
> 	-64-bit distro (Debian Lenny)
> 	-3ware 9650 SATA II RAID controller, one logical drive in RAID 5
> built from three 2TB WDC SATA disks with 64MB of cache each
> 	-XFS filesystem on each logical unit
> 
> When I run the "genfiles.sh" test locally on each node (directly on
> the RAID 5 unit), I get the following results:
> 
> 	-3143 files created in 60 seconds.
> 
> and if I comment out the "sync" line in the script:
> 
> 	-8947 files created in 60 seconds.
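>
> (Roughly, the script just creates files in a loop for 60 seconds and
> syncs after each one; a stripped-down sketch, not the exact script:)
>
> 	#!/bin/sh
> 	# create files as fast as possible for 60 seconds
> 	end=$(( $(date +%s) + 60 ))
> 	n=0
> 	while [ $(date +%s) -lt "$end" ]; do
> 	        dd if=/dev/zero of=file.$n bs=200k count=1 2>/dev/null
> 	        sync        # the line commented out for the second run
> 	        n=$((n + 1))
> 	done
> 	echo "$n files created in 60 seconds."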
> 
> Now, with Gluster mounted (22TB), I run the test and the result is:
> 
> 	-1370 files created in 60 seconds.
> 
> I'm running the cluster with the standard distributed configuration,
> and I have made a significant number of changes to the test process,
> but I get the same number of written files every time: never more than
> 1400 files created, and at most about 170Mbit/s of network load.
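>
> (For scale: 1370 files in 60s is about 23 files/s. Assuming an average
> file size of roughly 1MB, a guess given the 200k-4MB range, that is
> about 23MB/s, or roughly 180Mbit/s, consistent with the observed
> network load; so the limit looks like per-file latency rather than
> raw bandwidth.)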
> 
> The switching layer is gigabit (obviously), and no resources are under
> heavy load; everything looks normal.
> 
> I'm using the 3.0.3 version of Gluster.
> 
> Here is my configuration file (only the last part of the file):
> 
> ##############################################################################
> volume distribute
>         type cluster/distribute
>         subvolumes 172.17.15.1-1 172.17.15.2-1 172.17.15.3-1 172.17.15.4-1 172.17.15.5-1 172.17.15.6-1
> end-volume
> 
> volume writebehind
>         type performance/write-behind
>         option cache-size 1MB
>         option flush-behind on
>         subvolumes distribute
> end-volume
> 
> volume readahead
>         type performance/read-ahead
>         option page-count 4
>         subvolumes writebehind
> end-volume
> 
> volume iocache
>         type performance/io-cache
>         option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
>         option cache-timeout 1
>         subvolumes readahead
> end-volume
> 
> volume iothreads
>         type performance/io-threads
>         option thread-count 32 # default is 16
>         subvolumes distribute
> end-volume
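>
> # note: nothing below uses "iothreads" as a subvolume (the chain is
> # statprefetch -> quickread -> iocache -> readahead -> writebehind ->
> # distribute), so this translator appears to be unused as written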
> 
> volume quickread
>         type performance/quick-read
>         option cache-timeout 1
>         option max-file-size 128kB
>         subvolumes iocache
> end-volume
>
> volume statprefetch
>         type performance/stat-prefetch
>         subvolumes quickread
> end-volume
> ##############################################################################
> 
> Any ideas or suggestions to improve the performance?
> Thanks everyone!
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 



