[Gluster-users] Direct I/O access performance with GLFS2 rc8
Hideo Umemura (Max-T)
h.umemura at max-t.jp
Thu Apr 23 02:39:56 UTC 2009
Hello,
I ran a simple test of GLFS 2.0rc8 on CentOS Linux, on a machine with
two dual-core Xeons and 4GB RAM.
My benchmark is high-load, single-stream access through a loopback glfs
mount on a single server backed by a high-performance FC RAID.
The target volume is XFS formatted. The local benchmark results are as
follows (the benchmark tool is xdd; a sample invocation is shown after
the local numbers):
Buf I/O
READ = about 660MB/s
WRITE = about 480MB/s
Direct I/O
4MB block read = about 540MB/s
4MB block write = about 350MB/s
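For reference, a 4MB-request direct-I/O read pass with xdd looks
something like the following (the target path and transfer sizes are
illustrative; please check the exact flags against your xdd build).
Dropping -dio gives the buffered-I/O variant:

  # 4MB requests: 1024-byte blocks x reqsize 4096; -dio enables O_DIRECT
  xdd -op read -targets 1 /mnt/xfs/testfile \
      -blocksize 1024 -reqsize 4096 -dio -mbytes 8192 -passes 3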
The results for the GLFS loopback-mounted volume are as follows:
Buf I/O
READ = about 460MB/s
WRITE = about 330MB/s
Direct I/O
4MB block read = about 160MB/s
4MB block write = about 100MB/s
Buffered I/O through GLFS gives good results at small block sizes, but
access with large block sizes slows down. Direct I/O performance is
poor regardless of block size. (The direct-I/O access pattern I am
measuring is sketched below.)
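In case it helps, the direct-I/O pattern behind the numbers above is
essentially the following minimal C sketch (the file path is
illustrative). O_DIRECT bypasses the kernel page cache, so every
request reaches the FUSE layer and the glusterfs stack without caching
or read-ahead, which is presumably why direct I/O is so sensitive here:

  /* Minimal sketch of a 4MB-block direct-I/O read loop.
   * O_DIRECT requires buffer, offset, and length alignment,
   * hence posix_memalign. The path is illustrative. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #define BLOCK (4 * 1024 * 1024)   /* 4MB per request, as in the test */

  int main(void)
  {
      void *buf;
      ssize_t n;
      long long total = 0;

      int fd = open("/mnt/glfs/testfile", O_RDONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      /* 4096-byte alignment satisfies common sector sizes */
      if (posix_memalign(&buf, 4096, BLOCK) != 0) {
          fprintf(stderr, "alloc failed\n");
          return 1;
      }

      while ((n = read(fd, buf, BLOCK)) > 0)
          total += n;
      if (n < 0) perror("read");

      printf("read %lld bytes in %d-byte requests\n", total, BLOCK);
      free(buf);
      close(fd);
      return 0;
  }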
Detailed information can be found in the attached text file
(xddbench.rc8.log).
I want to use glfs with professional video applications over InfiniBand
networks. These applications work with large uncompressed image
sequences and/or large uncompressed movie files (up to 2K/4K), so
block-size control and direct-I/O performance are important to them.
Please advise me on options/configurations to improve performance, and
on the theory of how block size affects performance on GLFS. For
context, my loopback setup is sketched below.
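My loopback setup is essentially the stock single-server chain, roughly
like the following client volfile (the host, subvolume name, and cache
sizes are illustrative, and the option names should be checked against
the 2.0 sample volfiles; on the IB network the transport-type would be
ib-verbs instead of tcp):

  # glusterfs.vol (client side) -- loopback mount of the local server
  volume remote
    type protocol/client
    option transport-type tcp        # ib-verbs on the IB network
    option remote-host 127.0.0.1
    option remote-subvolume posix
  end-volume

  volume writebehind
    type performance/write-behind
    option cache-size 4MB            # illustrative
    subvolumes remote
  end-volume

  volume readahead
    type performance/read-ahead
    option page-count 4              # illustrative
    subvolumes writebehind
  end-volume

  volume iocache
    type performance/io-cache
    option cache-size 256MB          # illustrative
    subvolumes readahead
  end-volume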
My best regards,
hideo
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: xddbench.rc8.log
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20090423/9941faae/attachment.ksh>