[Gluster-users] Performance gluster 3.2.5 + QLogic Infiniband
Bryan Whitehead
driver at megahappy.net
Tue Apr 10 22:47:08 UTC 2012
dd if=/dev/zero of=/glustermount/deleteme.file bs=1M count=20000 conv=fsync oflag=sync
(the above will create a nearly 20 GB file)
I'd try a blocksize of 1M. That should help a lot.
With my InfiniBand setup I found performance was much better after
setting up a TCP network over InfiniBand (IPoIB) and then using pure tcp
as the transport for my gluster volume. For the life of me I couldn't
get rdma to beat tcp.
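In case it helps, a minimal sketch of that setup (the host names, brick
paths, and volume name below are placeholders I made up, not from this
thread; the -ib names would resolve to the IPoIB addresses):

```shell
# Hypothetical hosts/paths -- adjust to your own layout.
# Create the volume over the IPoIB hostnames, forcing the tcp transport:
gluster volume create scratchvol transport tcp \
    server1-ib:/bricks/brick1 server2-ib:/bricks/brick1
gluster volume start scratchvol

# Mount with the FUSE client, again over tcp:
mount -t glusterfs -o transport=tcp server1-ib:/scratchvol /glustermount
```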
Also, I found that increasing the max I/O threads to 64 helped. (Run
2-10 dd's at a time and you'll see the benefit.)
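Roughly what I mean (the volume name and mount point are placeholders;
performance.io-thread-count is the volume option that controls the
io-threads translator):

```shell
# Placeholder volume name -- substitute your own.
gluster volume set scratchvol performance.io-thread-count 64

# Then run several dd's in parallel and compare aggregate throughput:
for i in 1 2 3 4; do
    dd if=/dev/zero of=/glustermount/deleteme.$i bs=1M count=2000 conv=fsync &
done
wait
```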
On Tue, Apr 10, 2012 at 7:30 AM, michael at mayer.cx <michael at mayer.cx> wrote:
> Hi all,
>
>
>
> I am currently in the process of deploying Gluster as a storage/scratch
> file system for a new HPC cluster.
>
>
>
> For storage I use HP storage arrays (12x2 TB disks, formatted with xfs,
> plain vanilla options).
>
> Performance seems to be OK there: I am getting > 800 MB/sec when using
> hdparm and "dd < /dev/zero > /path/to/storage/file bs=1024k count=100".
>
>
>
> The InfiniBand fabric consists of QLE7342 cards running the latest
> QLogic OFED (based on stock 1.5.3).
>
> Performance seems to be OK here as well: with the osu_bw benchmark I am
> reaching 3.2 GB/s uni-directionally.
>
> iperf reports 15 Gbps for IPoIB (connected mode, MTU 65520), which I
> think is not too bad either.
>
>
>
> The servers all run RHEL 5.8, each with two X5690 CPUs and 24 GB of RAM.
>
>
>
> Now, if I create a new volume locally (transport tcp,rdma) using one
> brick (about 8 TB) on one of the storage hosts and mount it on the same
> host as a gluster mount (rdma or non-rdma does not matter), the
> read/write performance does not exceed 400 MB/s (doing the same simple
> dd test as above). The same is true if I mount it on another node. That
> means I am somehow missing about a factor of 2 in performance.
>
>
>
> I have been reading through the mailing list and the documentation as
> well, and tried various options (tuning the storage, setting various
> options on the gluster volume, etc.) but without success.
>
>
>
> What could be the problem here? Any pointers would be appreciated.
>
>
>
> Many thanks,
>
>
>
> Michael.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>