[Gluster-users] Performance

Harshavardhana harsha at gluster.com
Thu Jun 10 22:39:03 UTC 2010


On 06/10/2010 09:24 AM, Todd Daugherty wrote:
> well I just did. Using /dev/ram0.....
>
> but it is almost the same ratio of slowness.
>
> write speeds
> 2.9 GB/s (local)
> 1.1 GB/s (via Gluster)
>
> 1.5 GB/s (local)
> .5 GB/s (via Gluster)
>
> read speeds
> 2.9 GB/s (local)
> .5 GB/s (via gluster)
>
> 1.4 GB/s (local)
> .2 GB/s (via gluster)
>
> How can I speed this up?
>
> I am moving large files. The average file size is 10 megabytes. Please,
> anything would be great.
>
> dd if=/dev/zero of=/mnt/ramdisk/zero bs=1M count=8196 oflag=direct
> 8196+0 records in
> 8196+0 records out
> 8594128896 bytes (8.6 GB) copied, 2.96075 s, 2.9 GB/s
>    
Todd,

     Collect ibv_srq_pingpong results and check the actual bandwidth
between each pair of InfiniBand HCAs.  Run the Intel MPI Benchmarks (IMB)
shipped with OFED (OpenFabrics) to verify that the whole topology is
performing at its optimum.
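
A point-to-point check with ibv_srq_pingpong might look roughly like the
following; the hostname is a placeholder, and the message size and
iteration count are just example values to adjust for your fabric:

    # on the first node, start the listening side
    ibv_srq_pingpong -s 1048576 -n 1000

    # on the second node, run the client against the first node
    ibv_srq_pingpong -s 1048576 -n 1000 node1

Both sides print the measured bandwidth, so you can compare it against
what you see through Gluster.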

Since what you have been doing is sequential reads and writes with iozone
at low record sizes, you might need to increase the record size to 128k.
For the type of workload you are trying to generate, 16k is really a
small size.

Try record sizes such as 64k, 128k, 1M, 2M, etc.
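
For example, a quick sweep along those lines could look like this (the
mount point, file size and temporary file name are only placeholders):

    # sequential write (-i 0) and read (-i 1) at increasing record sizes
    for rs in 64k 128k 1m 2m; do
        iozone -i 0 -i 1 -r $rs -s 2g -f /mnt/gluster/iozone.tmp
    done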

Since you are using 2.0.9, it does not have the small-write improvements
yet; the FUSE code path for small-chunk writes has a large impact on
performance.
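
You can also see the effect of write size directly with dd through the
mount, along the lines of your earlier run (the mount point below is only
a placeholder):

    # 1 GB written in small 64k chunks through the Gluster mount
    dd if=/dev/zero of=/mnt/gluster/zero bs=64k count=16384 oflag=direct

    # the same 1 GB written in 1M chunks
    dd if=/dev/zero of=/mnt/gluster/zero bs=1M count=1024 oflag=direct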

Keep us posted.

Regards

-- 
Harshavardhana
Gluster Inc - http://www.gluster.com
+1(408)-770-1887, Ext-113
+1(408)-480-1730
