[Gluster-users] Performance
Todd Daugherty
todd at fotokem.hu
Thu Jun 10 16:24:36 UTC 2010
Well, I just did, using /dev/ram0... but it is almost the same ratio of slowness.
write speeds
2.9 GB/s (local)
1.1 GB/s (via Gluster)
1.5 GB/s (local)
0.5 GB/s (via Gluster)
read speeds
2.9 GB/s (local)
0.5 GB/s (via Gluster)
1.4 GB/s (local)
0.2 GB/s (via Gluster)
How can I speed this up?
I am moving large files; the average file size is 10 megabytes. Any
suggestions would be appreciated.
dd if=/dev/zero of=/mnt/ramdisk/zero bs=1M count=8196 oflag=direct
8196+0 records in
8196+0 records out
8594128896 bytes (8.6 GB) copied, 2.96075 s, 2.9 GB/s
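
For comparison, the local read numbers above presumably come from a similar
read-side test; a minimal sketch of reading the same file back through
O_DIRECT (not necessarily the exact command used here) would be:

# read the file written above back through O_DIRECT, discarding the data
dd if=/mnt/ramdisk/zero of=/dev/null bs=1M iflag=direct

iflag=direct bypasses the page cache on the read side, the same way
oflag=direct does for the write above.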
iozone -a -i0 -i1 -s 8192m -r 16384 iozone.$$.tmp
Iozone: Performance Test of File I/O
Version $Revision: 3.283 $
Compiled for 64 bit mode.
Build: linux
Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
Erik Habbinga, Kris Strecker, Walter Wong.
Run began: Thu Jun 10 17:14:33 2010
Auto Mode
File size set to 8388608 KB
Record Size 16384 KB
Command line used: iozone -a -i0 -i1 -s 8192m -r 16384 iozone.18192.tmp
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                                        random    random      bkwd    record    stride
              KB  reclen    write   rewrite      read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
         8388608   16384  1048196   1201775    538439    523655
iozone test complete.
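
For reference, iozone reports throughput in KB/s; converted to the units
used at the top of this message, the result row above is roughly:

1048196 KB/s / 1024 / 1024 ≈ 1.0 GB/s  (write)
 538439 KB/s / 1024 / 1024 ≈ 0.5 GB/s  (read)

which is in the same range as the "via Gluster" figures listed above.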
On Thu, Jun 10, 2010 at 6:07 PM, Todd Daugherty <todd at fotokem.hu> wrote:
> Nothing. It did not change anything. Very strange; it is like there is
> just a HARD limit set at around 225 megabytes per second. Is there
> anyone out there getting more than 225 megabytes per second read via
> QDR/GlusterFS on one stream?
>
> Todd
>
> iozone -a -i0 -i1 -s 16384m -r 16384 iozone.$$.tmp
> Iozone: Performance Test of File I/O
> Version $Revision: 3.283 $
> Compiled for 64 bit mode.
> Build: linux
>
> Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
> Al Slater, Scott Rhine, Mike Wisner, Ken Goss
> Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
> Randy Dunlap, Mark Montague, Dan Million,
> Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
> Erik Habbinga, Kris Strecker, Walter Wong.
>
> Run began: Thu Jun 10 17:00:58 2010
>
> Auto Mode
> File size set to 16777216 KB
> Record Size 16384 KB
> Command line used: iozone -a -i0 -i1 -s 16384m -r 16384 iozone.10965.tmp
> Output is in Kbytes/sec
> Time Resolution = 0.000001 seconds.
> Processor cache size set to 1024 Kbytes.
> Processor cache line size set to 32 bytes.
> File stride size set to 17 * record size.
>                                                         random    random      bkwd    record    stride
>               KB  reclen    write   rewrite      read    reread      read     write      read   rewrite      read    fwrite  frewrite     fread   freread
>        16777216   16384   489490    747651    245812    229806
>
> iozone test complete.
>
>
>
>
> On Tue, Jun 8, 2010 at 1:03 AM, Harshavardhana <harsha at gluster.com> wrote:
>> On 06/07/2010 02:36 PM, Todd Daugherty wrote:
>>>
>>> I included a bunch of other info. If there is anything else that will
>>> help solve this problem, I am very interested in getting to the bottom
>>> of this. Side note: I built a GlusterFS cluster (2.0.9) from RAM disks
>>> (10 GB /dev/ram0 bricks). It performs quite well, but still much slower
>>> than the backend /dev/ram0 bricks.
>>>
>>> Todd
>>>
>>
>> Todd,
>>
>> You need to increase the write-behind value to 8MB in each client volfile
>> and the read-ahead "page-count" to 8.
>>
>> Also set vm.swappiness = 0 through sysctl on both of the servers.
>>
>> Let us know how the performance looks; then we can tune things further.
>>
>> Regards
>>
>> --
>> Harshavardhana
>> Gluster Inc - http://www.gluster.com
>> +1(408)-770-1887, Ext-113
>> +1(408)-480-1730
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
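
For anyone following along, here is a rough sketch of what the client-side
tuning Harshavardhana suggests in the quoted reply above might look like in a
GlusterFS 2.0.x client volfile. This is only an illustration: "distribute" is
a placeholder for whatever the last client-side subvolume is actually called,
and the write-behind option is given as "cache-size", which is how I
understand the 2.0.x translator exposes it.

# write-behind with an 8MB window, stacked on top of the existing client subvolume
volume writebehind
  type performance/write-behind
  option cache-size 8MB          # the 8MB write-behind value suggested above
  subvolumes distribute          # placeholder: use the real subvolume name here
end-volume

# read-ahead keeping 8 pages in flight
volume readahead
  type performance/read-ahead
  option page-count 8            # the read-ahead page-count suggested above
  subvolumes writebehind
end-volume

The swappiness change on the servers is a one-liner:

sysctl -w vm.swappiness=0      # add "vm.swappiness = 0" to /etc/sysctl.conf to persist it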