[Gluster-users] 40 gig ethernet
Justin Clift
jclift at redhat.com
Sun Jun 16 00:34:04 UTC 2013
On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote:
> I'm using 40Gb InfiniBand with IPoIB for Gluster. Here are some ping
> times (from host 172.16.1.10):
>
> [root at node0.cloud ~]# ping -c 10 172.16.1.11
> PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
> 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
> 64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
> 64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
> 64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
> 64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
> 64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
> 64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
> 64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
> 64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
> 64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms
Out of curiosity, are you using connected mode or datagram mode
for this? Also, are you using the distro's inbuilt InfiniBand drivers,
or Mellanox's OFED? (Or Intel/QLogic's equivalent if using
their hardware.)
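For anyone following along, the IPoIB mode in question can be inspected
and changed through sysfs on Linux. This is a minimal sketch assuming the
IPoIB interface is named ib0 (the name may differ on your system), and it
is not a recommendation of one mode over the other:

```shell
# Show whether the IPoIB interface (assumed to be ib0) is in
# "datagram" or "connected" mode:
cat /sys/class/net/ib0/mode

# Switching modes requires the interface to be down. Connected mode
# permits a much larger MTU (up to 65520) than datagram mode (2044):
ip link set ib0 down
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
ip link set ib0 up
```

The mode choice mainly trades per-packet overhead (connected mode's large
MTU helps bulk throughput) against the simpler, multicast-friendly
datagram path, which is part of why the question matters for Gluster
tuning.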
Asking because I haven't yet seen any real "best practice" guidance
on ways to set this up for Gluster. ;)
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift