[Gluster-users] rdma or tcp?
beat at 0x1b.ch
Wed Apr 6 05:56:42 UTC 2011
On 05.04.11 15:35, isdtor wrote:
> On 4 April 2011 22:59, Anand Babu Periasamy <ab at gluster.com> wrote:
>> > My recommendation will be TCP. For most application needs TCP is just
>> > fine. On 1GigE, TCP/IP running host CPU is hardly a bottleneck.
> Thank you for the detailed assessment, Anand!
There are some 10GigE adapters supporting RDMA (iWARP). Even so, when
people talk about RDMA they usually mean Infiniband. 10GigE switch ports
are still too expensive today to be a worthwhile alternative to
Infiniband hardware for this use case.
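For reference, this is roughly how the transport is selected in a GlusterFS client volfile of that era (host and brick names below are placeholders, not from any real setup); with the newer CLI the equivalent is `gluster volume create ... transport tcp|rdma ...`:

```
# minimal sketch of a protocol/client translator; names are hypothetical
volume remote1
  type protocol/client
  option transport-type rdma     # use "tcp" for plain TCP/IP
  option remote-host server1
  option remote-subvolume brick1
end-volume
```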
With multiple clients per brick you can get up to ~3 GBytes/s
throughput on state-of-the-art Infiniband. I see rather good numbers
with 4-8 clients and a brick that either has enough memory to hold the
data or a very fast RAID.
An additional benefit is the much lower latency. Accessing small files
is much, much faster over RDMA than over TCP. Sadly I have no benchmark
numbers, but interactive use in a shell feels completely different.
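If you want a rough number instead of a feeling, a crude small-file probe like the following can be pointed at any mount (the directory path and file count are arbitrary choices, not from any real benchmark); run it once against a TCP mount and once against an RDMA mount and compare the timings:

```shell
#!/bin/sh
# Crude small-file latency probe: create many tiny files under DIR
# (point DIR at the GlusterFS mount you want to test), then time a
# metadata-heavy operation over them.
DIR=${1:-/tmp/smallfile-test}
mkdir -p "$DIR"
i=0
while [ "$i" -lt 1000 ]; do
    echo data > "$DIR/f$i"
    i=$((i + 1))
done
# Time how long it takes to stat every file back.
time ls -l "$DIR" > /dev/null
```

This only exercises create and stat, but those are exactly the operations where the latency difference shows up in day-to-day shell use.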
My conclusion: if you have GigE and you are happy with the speed, keep
using it. There is no real alternative and the performance is rather
good. The CPU load of TCP is no longer an issue today; our systems have
plenty of power.
If you are considering 10GigE, you will probably stay with TCP. The
user base of iWARP over 10GigE is rather small; I have never heard a
real-world success story. Don't buy 10GigE equipment if you plan to use
it exclusively for GlusterFS.
If you are looking for a fast and cheap fabric for your storage, keep
an eye on Infiniband.
\|/ Beat Rubischon <beat at 0x1b.ch>
( 0-0 ) http://www.0x1b.ch/~beat/
My experiences, thoughts and dreams: http://www.0x1b.ch/blog/