[Gluster-users] 40 gig ethernet

Bryan Whitehead driver at megahappy.net
Mon Jun 17 17:48:49 UTC 2013


I'm using the inbuilt InfiniBand drivers that come with CentOS 6.x. I
did go through the pain of downloading the ISO from Mellanox,
installing all their specially built tools, and working through their
tuning guide, and saw no speed improvement at all.
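
If you want to check what a box is actually running, something like
this shows which stack is loaded and whether IPoIB is in connected or
datagram mode (just a sketch - it assumes a Mellanox ConnectX HCA on
the mlx4_core module and that the IB interface is ib0):

  # Mellanox OFED installs ofed_info; fall back to the inbox module version
  ofed_info -s 2>/dev/null || modinfo mlx4_core | grep '^version'

  # IPoIB mode ("connected" or "datagram") and current MTU on the interface
  cat /sys/class/net/ib0/mode
  ip link show ib0 | grep -o 'mtu [0-9]*'

Connected mode with a large MTU (up to 65520) is usually what the
tuning guides push for IPoIB throughput.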

IPoIB can't push the speeds that native RDMA can, but I've not been
able to get Gluster to work correctly over native RDMA - I get massive
CPU spikes from glusterd, filesystem stalls, and terrible speeds;
basically native RDMA was unusable. I've not tried the 3.4 branch yet
(all my native RDMA attempts have been with the 3.3.x series). Anyway,
over IPoIB I can already completely outrun the raw speed of the
underlying RAID10 arrays across my boxes, so it doesn't really matter.
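
For anyone comparing, the only difference at volume-creation time is
the transport option. Roughly the shape of it (the volume name,
replica count and brick paths below are made up; the tcp transport
just rides over the IPoIB interface addresses, e.g. the 172.16.1.x
ones in the ping output):

  # tcp over IPoIB - the setup that works well for me
  gluster volume create gv0 replica 2 transport tcp \
      node0:/export/brick0 node1:/export/brick0

  # native rdma - the variant that gave me the CPU spikes and stalls on 3.3.x
  # gluster volume create gv0 replica 2 transport rdma \
  #     node0:/export/brick0 node1:/export/brick0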

I chose InfiniBand because, overall, it was far cheaper than 10G cards
and the associated switches (2 years ago). Prices haven't moved enough
since for me to bother with 10G.

On Sat, Jun 15, 2013 at 5:34 PM, Justin Clift <jclift at redhat.com> wrote:
> On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote:
>> I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
>> times (from host 172.16.1.10):
>>
>> [root at node0.cloud ~]# ping -c 10 172.16.1.11
>> PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
>> 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
>> 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
>> 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
>> 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
>> 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
>> 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
>> 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
>> 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
>> 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
>> 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms
>
>
> Out of curiosity, are you using connected mode or datagram mode
> for this?  Also, are you using the inbuilt OS infiniband drivers,
> or Mellanox's OFED? (Or Intel/QLogic's equivalent if using
> their stuff)
>
> Asking because I haven't yet seen any real "best practice" stuff
> on how to set this up for Gluster. ;)
>
> Regards and best wishes,
>
> Justin Clift
>
> --
> Open Source and Standards @ Red Hat
>
> twitter.com/realjustinclift
>


