[Gluster-users] Performance gluster 3.2.5 + QLogic Infiniband

michael at mayer.cx michael at mayer.cx
Tue Apr 10 14:30:05 UTC 2012


Hi all,

I am currently in the process of deploying gluster as a storage/scratch
file system for a new HPC cluster.

For storage I use HP storage arrays (12x2 TB disks, formatted with xfs,
plain vanilla options).
Performance seems to be OK, as I am getting > 800 MB/s when using hdparm
and "dd < /dev/zero > /path/to/storage/file bs=1024k count=100".

The InfiniBand fabric consists of QLE7342 cards, and the nodes run the latest
QLogic OFED (based on stock OFED 1.5.3).
Performance seems to be OK here as well: with the osu_bw benchmark I am
reaching 3.2 GB/s unidirectionally.
iperf reports 15 Gbit/s for IPoIB (connected mode, MTU 65520), which I think
is not too bad either.
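Roughly how I measured the fabric numbers (hostnames are just examples):

    # MPI point-to-point bandwidth between two nodes (OSU micro-benchmarks)
    mpirun -np 2 -host node1,node2 ./osu_bw

    # IPoIB throughput (connected mode, MTU 65520 already set on the IB interface)
    iperf -s                      # on node1
    iperf -c node1-ib -t 30       # on node2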

The servers all run RHEL 5.8 and each has two X5690 CPUs and 24 GB RAM.


Now, if I create a new volume (transport tcp,rdma) using one brick (about
8 TB) on one of the storage hosts and mount it on the same host via the
GlusterFS client (rdma or non-rdma does not matter), the read/write
performance does not exceed 400 MB/s (using the same simple dd test as
above). The same is true if I mount it on another node. That means I am
somehow missing about a factor of 2 in performance.
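Roughly what I am doing (volume, brick and mount point names are just
examples):

    # single-brick volume with both transports
    gluster volume create scratch transport tcp,rdma server1:/bricks/brick1
    gluster volume start scratch

    # native client mount on the same host (or on another node)
    mount -t glusterfs server1:/scratch /mnt/scratch

    # same simple dd test as on the raw file system
    dd if=/dev/zero of=/mnt/scratch/file bs=1024k count=1000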

I have been reading through the mailing list and the documentation and have
tried various things (tuning the storage, setting various options on the
gluster volume, etc.), but without success so far.
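For example, the kind of volume options I have been experimenting with (the
values below are just examples, none of this made a big difference):

    gluster volume set scratch performance.cache-size 256MB
    gluster volume set scratch performance.io-thread-count 32
    gluster volume set scratch performance.write-behind-window-size 4MB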

What could be the problem here? Any pointers would be appreciated.

Many thanks,

Michael.

