[Gluster-users] Configuration suggestions (aka poor/slow performance on new hardware)

Ramiro Magallanes listas at sabueso.org
Fri Mar 26 17:04:07 UTC 2010


	Hello there!

I'm working on a 6-node cluster built on new SuperMicro hardware.
The cluster has to store millions of JPEGs (roughly 200 KB-4 MB each) and
small text files.

Each node is:

	-Single Xeon(R) CPU E5405 @ 2.00GHz (4 cores)
	-4 GB RAM
	-64-bit distro (Debian Lenny)
	-3ware 9650 SATA II RAID controller, with 1 logical drive in RAID 5
mode built from 3 WDC 2 TB SATA hard disks, each with 64 MB of cache.
	-XFS filesystem on each logical unit.

When i run the "genfiles.sh" test on each node in local (in the raid-5
unit) mode i've have the follow results:

	-3143 files created in 60 seconds.

and if i comment the "sync" line in the script:

	-8947 files created in 60 seconds.
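
For context, here is a minimal sketch of what a benchmark like genfiles.sh
presumably does (the actual script isn't shown here, so the target
directory, file size and loop structure below are assumptions of mine):

    #!/bin/bash
    # Hypothetical small-file creation benchmark, roughly in the spirit of
    # genfiles.sh: create files for 60 seconds, optionally syncing after
    # each write, and report how many were created.
    TARGET=${1:-/mnt/test}          # assumed target directory
    mkdir -p "$TARGET"
    count=0
    end=$(( $(date +%s) + 60 ))
    while [ "$(date +%s)" -lt "$end" ]; do
        dd if=/dev/zero of="$TARGET/file-$count" bs=4k count=1 2>/dev/null
        sync                        # comment this line out to skip flushing
        count=$((count + 1))
    done
    echo "$count files created in 60 seconds."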

Now, with the Gluster volume mounted (22 TB), I run the same test and the
result is:

	-1370 files created in 60 seconds.

I'm running the cluster with a standard distribute configuration, and I
have made a significant number of changes during testing, but I get
essentially the same number of written files every time: never more than
1400 files created, and around 170 Mbit/s of network load at most.

The switching layer is gigabit (obviously), and no resources are under
heavy use; everything looks normal.

I'm using the 3.0.3 version of Gluster.

Here is my configuration file (only the last part of the file):

##############################################################################
volume distribute
        type cluster/distribute
        subvolumes 172.17.15.1-1 172.17.15.2-1 172.17.15.3-1 172.17.15.4-1 172.17.15.5-1 172.17.15.6-1
end-volume

volume writebehind
        type performance/write-behind
        option cache-size 1MB
        option flush-behind on
        subvolumes distribute
end-volume

volume readahead
        type performance/read-ahead
        option page-count 4
        subvolumes writebehind
end-volume

volume iocache
        type performance/io-cache
        option cache-size `grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
        option cache-timeout 1
        subvolumes readahead
end-volume

volume iothreads
        type performance/io-threads
        option thread-count 32 # default is 16
        subvolumes distribute
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 128kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
##############################################################################
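
For what it's worth, the backtick expression in the io-cache volume works
out to roughly 20% of the node's RAM, expressed in whole MB. On these 4 GB
nodes you can check what it evaluates to like this:

    # Same expression as in the io-cache cache-size option above:
    # 20% of MemTotal (in kB), converted to MB and truncated to an integer.
    grep 'MemTotal' /proc/meminfo | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.
    # On a node with ~4 GB of RAM this prints roughly 800, i.e. cache-size ~800MB.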

Any ideas or suggestions to make the performance go up?
Thanks, everyone!



