[Gluster-users] Infiniband performance issues answered?

Sabuj Pattanayek sabujp at gmail.com
Tue Dec 18 01:17:06 UTC 2012


And yes, on some Dells you'll get strange network and RAID controller
performance characteristics if you turn on BIOS power management.
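
If you can't get into the BIOS right away, it's also worth checking
whether the OS is letting the cores clock down, which produces the same
symptoms. A minimal check and workaround via sysfs (the paths are
standard wherever cpufreq is enabled; run as root):

    # show the current frequency governor on every core
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    # pin every core to the "performance" governor
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > $g
    done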

On Mon, Dec 17, 2012 at 7:15 PM, Sabuj Pattanayek <sabujp at gmail.com> wrote:
> I have R610s with a similar setup, but with HT turned on, and I'm
> getting 3.5GB/s for one-way RDMA tests between two QDR-connected
> clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, versus
> 1GB/s with IPoIB connections (which seem to be limited to 10GbE).
> Note that I had problems with the 1.x branch of OFED and am using
> the latest 3.x RC.
>
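> For anyone who wants to reproduce that comparison, a minimal sketch
> using stock tools (ib_write_bw is in the perftest package; the
> hostname node1 and the 192.168.1.1 ib0 address are placeholders):
>
>     # RDMA write bandwidth: start the server side on one node ...
>     ib_write_bw
>
>     # ... then point the client at it from the other node
>     ib_write_bw node1
>
>     # IPoIB TCP throughput: iperf server on one node ...
>     iperf -s
>
>     # ... and the client against the server's ib0 address
>     iperf -c 192.168.1.1
>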
> On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian <joe at julianfamily.org> wrote:
>> In IRC today, someone who was hitting the same IB performance ceiling that
>> occasionally gets reported had this to say:
>>
>> [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED
>> distro
>> [11:50] <nissim> so I moved to CentOS 6.3
>> [11:51] <nissim> next I removed all distribution-related infiniband rpms and
>> built the latest OFED package
>> [11:52] <nissim> disabled ServerSpeed service
>> [11:52] <nissim> disabled BIOS hyperthreading
>> [11:52] <nissim> disabled BIOS power mgmt
>> [11:53] <nissim> ran ib_write_test and got 5000MB/s
>> [11:53] <nissim> got 5000MB/s on localhost
>>
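>> Condensed into commands, that checklist looks roughly like this on
>> CentOS 6 (package names are examples, the installer name depends on
>> which OFED bundle was downloaded, ServerSpeed is whatever power/speed
>> daemon that box was running, and "ib_write_test" is presumably
>> ib_write_bw from the perftest package; the hyperthreading and power
>> management changes happen in BIOS setup, not from the shell):
>>
>>     # remove the distribution's InfiniBand stack, e.g.
>>     yum remove rdma libibverbs librdmacm infiniband-diags
>>
>>     # build/install the latest OFED from the downloaded bundle,
>>     # e.g. the Mellanox OFED installer:
>>     ./mlnxofedinstall
>>
>>     # disable the speed/power service named above
>>     chkconfig ServerSpeed off
>>     service ServerSpeed stop
>>
>>     # after disabling HT and power mgmt in the BIOS and rebooting,
>>     # run the loopback test: server in one shell ...
>>     ib_write_bw
>>     # ... client against localhost in another
>>     ib_write_bw localhost
>>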
>> FWIW, if someone's encountering that issue, this, together with the
>> changes since 3.4.0qa5, might be worth knowing about.
>>
>> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387


