[Gluster-users] Throughput over Infiniband

Gluster Mailing List gluster.helpshift at gmail.com
Mon Oct 22 19:46:18 UTC 2012


Corey,

Make sure to test with direct I/O; otherwise caching can give you an unrealistic picture of your actual throughput. Also, the IPoIB driver is typically not recommended with Infiniband, since pushing the traffic through the TCP stack introduces unnecessary overhead.
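
A minimal sketch of what I mean, with placeholder paths and sizes:

    # write test that bypasses the client page cache
    dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=1M count=8192 oflag=direct
    # read test; drop caches first so you aren't reading back from RAM
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/glusterfs/ddtest of=/dev/null bs=1M iflag=direct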

Knowing how you have Gluster configured is also essential to judging whether the numbers you get from testing are within expectations, so please include the output of `gluster volume info`.
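
In particular, the volume type, brick layout, and transport-type (tcp vs. rdma) all matter when judging throughput. Capturing that is just:

    gluster volume info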

Thanks,

Eco

On Fri, Sep 7, 2012 at 1:45 AM, Corey Kovacs <corey.kovacs at gmail.com> wrote:

> Folks, 
> 
> I finally got my hands on a 4x FDR (56Gb) Infiniband switch and 4 cards to do some testing of GlusterFS over that interface.
> 
> So far, I am not getting the throughput I _think_ I should see.
> 
> My config is made up of:
> 
> 4 DL360 G8s (three bricks and one client)
> 4 4x FDR dual-port IB cards (one port configured per card, per host)
> 1 4x FDR 36-port Mellanox switch (managed and configured)
> GlusterFS 3.2.6
> RHEL 6.3
> 
> I have tested the IB cards and get about 6 GB/sec between hosts over raw IB. Using IPoIB, I can get about 22 Gb/sec. Not too shabby for a first go, but I expected more (the cards are in connected mode with an MTU of 64k).
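> 
> (For anyone reproducing the raw IB vs. IPoIB comparison, I mean the usual bandwidth tools, roughly along these lines; host names are placeholders:
> 
>     ib_write_bw                   # raw RDMA bandwidth, server side (perftest package)
>     ib_write_bw <server>          # client side
>     iperf -s                      # TCP over IPoIB, server side
>     iperf -c <server-ipoib-addr>  # client side
> )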
> 
> My raw speed to the disks (through the buffer cache... I just realized I've not tested direct-mode I/O, I'll do that later today) is about 800MB/sec. I expect to see on the order of 2GB/sec (a little less than 3 x 800).
> 
> When I write a large stream using dd and watch the bricks' I/O, I see ~800MB/sec on each one, but at the end of the test dd itself still reports only ~800MB/sec.
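> 
> (Concretely, the test is a single large sequential stream to the Gluster mount, watched with something like iostat on each brick; the path and sizes are placeholders:
> 
>     # on the client: one sequential write stream to the Gluster mount
>     dd if=/dev/zero of=/mnt/gluster/bigfile bs=1M count=20480
>     # on each brick: per-device throughput while the write runs
>     iostat -xm 1
> )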
> 
> Am I missing something fundamental?
> 
> Any pointers would be appreciated,
> 
> 
> Thanks!
> 
> 
> Corey
