[Gluster-users] Gluster 3.3.0 on CentOS 6 - GigabitEthernet vs InfiniBand

Marko Vendelin marko.vendelin at gmail.com
Thu Oct 18 10:52:18 UTC 2012


I wonder whether you benchmarked the performance from a single node or
stressed your Gluster storage from multiple clients? We have seen on
our small Gluster installation that you only get the full performance
out of Gluster when many clients read or write at the same time.
Otherwise you may simply be FUSE-limited at the client level, not at
the network level. If you ran the dd test from a single client, try
running it simultaneously on multiple machines. In our case, we got
fully hardware-limited performance out of Gluster by using ~40
clients.
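
For example, something along these lines (hostnames and the mount
point are placeholders, adapt to your setup):

  # start the same write on many clients at once, then sum the MB/s figures
  for i in $(seq -w 1 40); do
      ssh client$i "dd if=/dev/zero of=/mnt/gluster/test.$i bs=100M count=50 conv=fsync" &
  done
  wait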

Re iozone: the results probably depend on whether you use striping /
replicas or not. Without striping, the iozone test file is created on
a single brick, so you are really testing access to that particular
brick (which can be a different one every time a new file is
created).
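
As far as I remember you can ask the client which brick a given file
ended up on via the pathinfo virtual xattr (mount point and file name
are placeholders):

  # prints the brick path(s) backing this file
  getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/iozone.tmp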

Re network: it would be very interesting to know the details, and
whether those rates stay the same for Gigabit and IB networks when you
stress the Gluster servers from many clients. We are planning to run
Gluster over an IB network in our new cluster, so it is vital for us
to know whether IB lets you fully exploit the storage servers' RAM
caches (iozone with the smaller files).
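
For instance, something like this, with the file size well below the
servers' RAM so reads can be served from their page cache (sizes are
made up, adjust to your hardware):

  # 1 GB file, 1 MB records, write (-i 0) and read (-i 1) tests only
  iozone -s 1g -r 1m -i 0 -i 1 -f /mnt/gluster/iozone.tmp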

cheers,

marko

On Thu, Oct 18, 2012 at 10:48 AM, Bartek Krawczyk
<bbartlomiej.mail at gmail.com> wrote:
> On 18 October 2012 08:44, Ling Ho <ling at slac.stanford.edu> wrote:
>> When you mount using rdma, try running a network tool like iftop to
>> see if the traffic is going through your GigE interface.
>>
>> If your volume is created with both tcp and rdma, my experience is
>> that rdma does not work under 3.3.0 and it always falls back to tcp.
>>
>> However, IPoIB works fine for us. Again, you should check where the
>> traffic goes.
> I used tcpdump and iftop and confirmed that the traffic using IPoIB
> goes through the ib0 interface, not eth.
> When I use the rdma transport the traffic doesn't show up on either
> eth or ib0 - so I guess it's correctly using RDMA.
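> For the record, the kind of checks I mean (interface names and port
> numbers are examples; glusterd talks on 24007 and the brick daemons
> listen just above that):
>
>   iftop -i ib0                      # live bandwidth on the IPoIB interface
>   tcpdump -i eth0 -n port 24007     # any management traffic on GbE?
>   tcpdump -i eth0 -n portrange 24009-24015   # bricks falling back to tcp?
>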
> I re-ran the iozone -a tests and they're the same. In addition I did
> a "dd" test for read and write on an IPoIB-mounted and an
> RDMA-mounted volume.
>
> IPoIB:
> [root at master gluster3]# dd if=/dev/zero of=test bs=100M count=50
> 50+0 records in
> 50+0 records out
> 5242880000 bytes (5.2 GB) copied, 16.997 s, 308 MB/s
>
> [root at master gluster3]# dd if=test of=/dev/null
> 10240000+0 records in
> 10240000+0 records out
> 5242880000 bytes (5.2 GB) copied, 28.4185 s, 184 MB/s
>
>
> RDMA:
> [root at master gluster]# dd if=/dev/zero of=test bs=100M count=50
> 50+0 records in
> 50+0 records out
> 5242880000 bytes (5.2 GB) copied, 70.3636 s, 74.5 MB/s
>
> [root at master gluster]# dd if=test of=/dev/null
> 10240000+0 records in
> 10240000+0 records out
> 5242880000 bytes (5.2 GB) copied, 10.8389 s, 484 MB/s
>
> I did a "sync" between those tests. Funny, isn't it? RDMA is much
> slower than IPoIB on writes and faster on reads.
>
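> (To rule out the client page cache inflating the read numbers, one
> could also drop caches between runs, for example:
>
>   sync && echo 3 > /proc/sys/vm/drop_caches   # as root, on the client
>
> otherwise a re-read can come straight from RAM.)
>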
> I think we'll stick with IPoIB until something's fixed in glusterfs.
> And still - why are the results so similar to Gigabit Ethernet?
>
> Regards
>
> --
> Bartek Krawczyk
> network and system administrator
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users


