[Gluster-users] Performance
Todd Daugherty
todd at fotokem.hu
Thu Jun 10 16:07:18 UTC 2010
Nothing. It did not change anything. Very strange; it is like there is
just a HARD limit set at around 225 megabytes per second. Is there
anyone out there getting more than 225 megabytes per second read via
QDR/GlusterFS on one stream?
Todd
iozone -a -i0 -i1 -s 16384m -r 16384 iozone.$$.tmp
Iozone: Performance Test of File I/O
Version $Revision: 3.283 $
Compiled for 64 bit mode.
Build: linux
Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
Erik Habbinga, Kris Strecker, Walter Wong.
Run began: Thu Jun 10 17:00:58 2010
Auto Mode
File size set to 16777216 KB
Record Size 16384 KB
Command line used: iozone -a -i0 -i1 -s 16384m -r 16384 iozone.10965.tmp
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                                          random   random     bkwd   record   stride
              KB  reclen    write  rewrite     read   reread     read    write     read  rewrite     read   fwrite frewrite    fread  freread
        16777216   16384   489490   747651   245812   229806
iozone test complete.
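One way to see whether the ~225 MB/s ceiling sits in the GlusterFS client stack or in the backend storage would be to time a single-stream read of the same file through the mount and then directly on a brick. This is only a sketch: the mount point and brick path below are placeholders (they are not shown in this thread), and the cache drop just keeps the second read from being served out of RAM.

  # Drop cached pages first so neither read is served from RAM (run as root).
  sync; echo 3 > /proc/sys/vm/drop_caches

  # Single-stream read through the GlusterFS mount (hypothetical mount point).
  dd if=/mnt/glusterfs/iozone.10965.tmp of=/dev/null bs=1M

  # Same file read straight from the backend brick on the server (hypothetical path).
  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/data/brick1/iozone.10965.tmp of=/dev/null bs=1M

If the brick read is much faster than the mount read, the limit is somewhere in the client/translator/network path rather than the disks.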
On Tue, Jun 8, 2010 at 1:03 AM, Harshavardhana <harsha at gluster.com> wrote:
> On 06/07/2010 02:36 PM, Todd Daugherty wrote:
>>
>> I included a bunch of other info. If there is anything else that will
>> help solve this problem, I am very interested in getting to the bottom
>> of this. Side note: I built a GlusterFS cluster (2.0.9) from RAM disks
>> (10 GB /dev/ram0 bricks). That performs quite well, but still much
>> slower than reading the backend /dev/ram0 bricks directly.
>>
>> Todd
>>
>
> Todd,
>
> You need to increase the write-behind value to 8MB in each client volfile
> and set the read-ahead "page-count" to 8.
>
> Also set vm.swappiness = 0 through sysctl on both of the servers.
>
> Let us know how the performance looks; then we can tune things further.
>
> Regards
>
> --
> Harshavardhana
> Gluster Inc - http://www.gluster.com
> +1(408)-770-1887, Ext-113
> +1(408)-480-1730
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
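For reference, a rough sketch of what the suggested tuning might look like in a hand-written client volfile of that era. The volume names and the subvolume they stack on are placeholders, and the write-behind option name ("cache-size" here) is an assumption that may differ between releases; only "page-count" is named explicitly above.

  volume writebehind
    type performance/write-behind
    option cache-size 8MB        # the "write-behind value"; option name assumed, check your release
    subvolumes distribute         # placeholder: whatever cluster volume your client stack provides
  end-volume

  volume readahead
    type performance/read-ahead
    option page-count 8           # as suggested above
    subvolumes writebehind
  end-volume

And the swappiness change on both servers, as root:

  sysctl -w vm.swappiness=0
  # Persist across reboots by adding "vm.swappiness = 0" to /etc/sysctl.conf.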