[Gluster-users] Direct I/O access performance with GLFS2 rc8

Anand Avati avati at gluster.com
Thu Apr 23 08:22:32 UTC 2009


It is unfair to expect high throughput for I/O with O_DIRECT on a
network filesystem. The fact that a loopback mount is in the picture
can bias expectations, but in reality, if you were to compare
GlusterFS direct I/O performance with the direct I/O performance of
any other network filesystem, you would see that the differences are
not large. Any trickery a filesystem might do to improve direct I/O
performance would break O_DIRECT semantics.

Along those lines, there _is_ a trick you can use with GlusterFS. I
presume your application opens files with O_DIRECT to keep write data
from filling up the page cache. If the goal is only to guarantee that
data has been committed to disk, you can simply call fsync() instead.
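
As a rough sketch of that alternative (illustrative only: the mount
path, block size, and loop count below are made up, and error handling
is kept minimal), you open the file without O_DIRECT, let the writes
be buffered, and call fsync() once before close() to get the on-disk
guarantee:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical path on the glfs mount -- adjust to your setup. */
    const char *path = "/mnt/glusterfs/output.dat";
    const size_t blk = 4 * 1024 * 1024;   /* 4MB block, as in the benchmark */

    char *buf = malloc(blk);
    if (!buf)
        return 1;
    memset(buf, 0xAB, blk);

    /* Open without O_DIRECT: writes take the normal buffered path. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (int i = 0; i < 16; i++) {         /* 64MB total */
        if (write(fd, buf, blk) != (ssize_t) blk) {
            perror("write");
            return 1;
        }
    }

    /* fsync() gives the on-disk commitment that O_DIRECT was being
     * used for, without making every individual write synchronous. */
    if (fsync(fd) != 0) {
        perror("fsync");
        return 1;
    }

    close(fd);
    free(buf);
    return 0;
}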
You can then make a small change in write-behind.c to let it actually
do background writes on files which are opened/created in O_DIRECT
mode. This does not eat up the page cache (it would not even if the
file were opened without O_DIRECT). The O_DIRECT flag itself ensures
that data hits the disk on the server side. The effect of write-behind
is only to pipeline the write calls, and by default write-behind
ensures that all writes have reached the server before close() on the
file descriptor returns (unless you turn on the flush-behind option).
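
For orientation, write-behind is a client-side translator; a minimal
fragment of a client volfile loading it might look like the sketch
below (the volume names "client" and "writebehind" are placeholders,
and option names should be checked against the documentation for your
GlusterFS version):

volume writebehind
  type performance/write-behind
  # flush-behind is off by default: close() does not return until all
  # writes have reached the server. Turning it on trades that guarantee
  # for a faster close().
  option flush-behind off
  subvolumes client
end-volume

The source change mentioned above would go into write-behind.c inside
this translator (under xlators/performance/write-behind/ in the source
tree), relaxing the check that currently keeps it from doing background
writes on O_DIRECT file descriptors.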

Hope that might help.

Avati


> I did a simple tryout of GLFS 2.0rc8 on CentOS Linux with 2x Dual Core Xeon
> and 4GB RAM.
> My benchmark is high-load, single-stream access on a loopback glfs mount
> on a single server with a high-performance FC RAID.
> The target volume is XFS formatted. The local benchmark results are as
> follows (the benchmark tool is xdd):
>
> Buffered I/O
> READ = about 660MB/s
> WRITE = about 480MB/s
>
> Direct I/O
> 4MB block read = about 540MB/s
> 4MB block write = about 350MB/s
>
> The results for the GLFS loopback-mounted volume are as follows:
>
> Buffered I/O
> READ = about 460MB/s
> WRITE = about 330MB/s
>
> Direct I/O
> 4MB block read = about 160MB/s
> 4MB block write = about 100MB/s
>
> Buffered I/O with GLFS gives good results at small block sizes,
> but access with large block sizes slows down.
> Direct I/O performance is poor regardless of block size.
> Please see the attached text file for detailed information.
>
> I want to use glfs with professional video applications on IB networks.
> These video applications use storage for large uncompressed image sequences
> and/or large uncompressed movie files (up to 2K/4K).
> Block size control and direct I/O performance are important for them.
>
> Please advise me on options/configurations to improve performance,
> and on the theory of how block size affects performance on GLFS.
>
> My best regards,
>
> hideo
>



