[Gluster-users] Throughput over InfiniBand

Corey Kovacs corey.kovacs at gmail.com
Fri Sep 7 13:45:48 UTC 2012


Folks,

I finally got my hands on a 4x FDR (56 Gb/s) InfiniBand switch and 4 cards to
do some testing of GlusterFS over that interface.

So far, I am not getting the throughput I _think_ I should see.

My config is made up of:

4 HP DL360 G8s (three bricks and one client)
4 dual-port 4x FDR IB cards (one port configured per card, per host)
1 36-port 4x FDR Mellanox switch (managed and configured)
GlusterFS 3.2.6
RHEL 6.3
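
For reference, a 3-brick volume like mine would be created roughly along
these lines (the volume name and brick paths below are placeholders, not
the exact ones I used):

    # Create and start a 3-brick volume; tcp here would run over IPoIB.
    # GlusterFS 3.2 also accepts "transport rdma" for native IB verbs.
    gluster volume create testvol transport tcp \
        server1:/export/brick1 \
        server2:/export/brick1 \
        server3:/export/brick1
    gluster volume start testvol

    # On the client, mount via the native FUSE client:
    mount -t glusterfs server1:/testvol /mnt/gluster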

I have tested the IB cards and get about 6 GB/s between hosts over raw IB.
Using IPoIB, I can get about 22 Gb/s. Not too shabby for a first go, but I
expected more (the cards are in connected mode with an MTU of 64k).
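
For anyone wanting to reproduce those numbers, the tests were along these
lines (hostnames and the ib0 device name are placeholders):

    # Raw IB bandwidth with the perftest tools; start the server side first:
    server$ ib_send_bw
    client$ ib_send_bw server-ib      # ~6 GB/s here

    # Put IPoIB into connected mode with a large MTU:
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

    # TCP throughput over IPoIB with iperf (4 parallel streams); ~22 Gb/s here:
    server$ iperf -s
    client$ iperf -c server-ib -P 4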

My raw speed to the disks (through the buffer cache... I just realized I've
not tested direct-mode I/O; I'll do that later today) is about 800 MB/s. I
expect to see on the order of 2 GB/s (a little less than 3x800, with three
bricks writing in parallel).
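
The cached test I ran, and the direct-I/O one I still owe, are just dd
one-liners (the brick path is a placeholder):

    # Through the buffer cache (what I measured, ~800 MB/s):
    dd if=/dev/zero of=/export/brick1/testfile bs=1M count=16384

    # Bypassing the page cache with O_DIRECT, for a more honest disk number:
    dd if=/dev/zero of=/export/brick1/testfile bs=1M count=16384 oflag=direct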

When I write a large stream using dd and watch the bricks' I/O, I see
~800 MB/s on each one, but at the end of the test the report from dd
itself indicates only 800 MB/s overall.
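
Concretely, the client-side test and the per-brick monitoring look like
this (mount point as above; the file size is arbitrary):

    # On the client: one large sequential write into the gluster mount:
    dd if=/dev/zero of=/mnt/gluster/bigfile bs=1M count=65536

    # On each brick server, watch device throughput while dd runs:
    iostat -xm 1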

Am I missing something fundamental?

Any pointers would be appreciated,


Thanks!


Corey