[Gluster-users] Infiniband performance issues answered?
Sabuj Pattanayek
sabujp at gmail.com
Tue Dec 18 15:29:14 UTC 2012
I think qperf just writes to and from memory on both systems, so that
it tests the network rather than the disk, then tosses the packets
away.
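
As a rough sketch (the host name "ib-server1" below is just a
placeholder, not from this thread), a memory-to-memory run with qperf
might look like:

    # on the server side: start the qperf listener (no arguments needed)
    qperf

    # on the client side: measure RDMA write bandwidth and TCP (IPoIB) bandwidth
    qperf ib-server1 rc_rdma_write_bw tcp_bw

    # tell the listener to exit when done
    qperf ib-server1 quit
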
On Tue, Dec 18, 2012 at 3:34 AM, Andrew Holway <a.holway at syseleven.de> wrote:
>
> On Dec 18, 2012, at 2:15 AM, Sabuj Pattanayek wrote:
>
>> I have R610s with a similar setup but with HT turned on, and I'm
>> getting 3.5GB/s for one-way RDMA tests between two QDR-connected
>> clients using Mellanox ConnectX x4 PCI-E cards in x8 slots, and
>> 1GB/s with IPoIB connections (which seem to be limited to 10GbE).
>> Note, I had problems with the 1.x branch of OFED and am using the
>> latest 3.x RC.
>
> What are you writing to and from?
>
>
>
>>
>> On Mon, Dec 17, 2012 at 6:44 PM, Joe Julian <joe at julianfamily.org> wrote:
>>> In IRC today, someone who was hitting that same IB performance ceiling that
>>> occasionally gets reported had this to say:
>>>
>>> [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED
>>> distro
>>> [11:50] <nissim> so I moved to CentOS 6.3
>>> [11:51] <nissim> next I removed all distribution-related infiniband rpms and
>>> built the latest OFED package
>>> [11:52] <nissim> disabled ServerSpeed service
>>> [11:52] <nissim> disabled BIOS hyperthreading
>>> [11:52] <nissim> disabled BIOS power mgmt
>>> [11:53] <nissim> ran ib_write_test and got 5000MB/s
>>> [11:53] <nissim> got 5000MB/s on localhost
>>>
>>> FWIW, if someone's encountering that issue, between this and the changes
>>> since 3.4.0qa5 it might be worth knowing about.
>>>
>>> http://irclog.perlgeek.de/gluster/2012-12-17#i_6251387
>
>
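
A couple of sketches in case they help anyone reproducing the above;
device and host names are placeholders, not taken from the thread.
nissim's "ib_write_test" is presumably ib_write_bw from the perftest
package that MLNX OFED installs; the localhost and two-node runs would
look roughly like:

    # localhost loopback: server and client on the same machine
    ib_write_bw -d mlx4_0 &          # server end, backgrounded
    ib_write_bw -d mlx4_0 localhost  # client end, connects to the local server

    # two-node run
    ib_write_bw -d mlx4_0            # on node A (server)
    ib_write_bw -d mlx4_0 -a nodeA   # on node B; -a sweeps message sizes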
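
And for the 1GB/s IPoIB figure quoted further up: IPoIB in datagram
mode with its small default MTU often lands around 10GbE speeds, so
checking the mode is a cheap first step (the interface name ib0 is an
assumption):

    # "datagram" or "connected"; connected mode allows a much larger MTU
    cat /sys/class/net/ib0/mode

    # MTU is 2044 in datagram mode, up to 65520 in connected mode
    ip link show ib0

    # switching to connected mode (the interface should be idle while changing it)
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520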