[Gluster-users] tips/best practices for gluster rdma?
jclift at redhat.com
Wed Jul 10 19:00:59 UTC 2013
On 10/07/2013, at 7:49 PM, Matthew Nicholson wrote:
> Well, first of all, thanks for the responses. The volume WAS failing over to TCP just as predicted, though WHY is unclear, as the fabric is known to be working (it has about 28K compute cores on it, all doing heavy MPI testing), and the OFED/verbs stack is consistent across all client/storage systems (in fact, the OS image is identical).
> That's quite sad that RDMA isn't going to make 3.4. We put a good deal of hope and effort into planning around 3.4 for this storage system, specifically for RDMA support (with warnings to the team that it wasn't in/tested for 3.3, and that all we could do was HOPE it would land in 3.4 in time for when we want to go live). We're getting "okay" performance out of IPoIB right now, and our bottleneck actually seems to be the fabric design/layout, as we're peaking at about 4.2 GB/s writing 10 TB over 160 threads to this distributed volume.
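For anyone following along, the setup being discussed looks roughly like the sketch below: a distributed volume created with both transports, so clients can request RDMA but fall back to TCP. This is a hedged sketch, not the poster's exact commands; the volume name, brick paths, and server hostnames are placeholders, and option details vary by Gluster version.

```shell
# Create a distributed volume that supports both transports
# (volume name, servers, and brick paths are placeholders)
gluster volume create distvol transport tcp,rdma \
    server1:/bricks/b1 server2:/bricks/b1
gluster volume start distvol

# Confirm which transports the volume advertises
gluster volume info distvol | grep 'Transport-type'

# Mount explicitly over RDMA; if the RDMA transport fails,
# clients can silently fall back to TCP (the behaviour discussed above)
mount -t glusterfs -o transport=rdma server1:/distvol /mnt/distvol
```

Checking the client mount log (typically under /var/log/glusterfs/) is one way to see which transport actually came up, since the fallback itself is otherwise quiet.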
Out of curiosity, are you running the stock OS-provided InfiniBand stack, or are you using the "vendor optimised" version? (e.g. "Mellanox OFED" if you're using Mellanox cards)
Asking because, although I've not personally done any perf measurements between them, Mellanox swears the new v2 of their OFED stack is much higher performance than either the stock drivers or their v1 stack. IPoIB is especially tuned.
I'd really like to get around to testing that some time, but it won't be soon. :(
> When it IS ready and in 3.4.1 (hopefully!), having good docs around it, and maybe even a simple printf for the TCP failover, would be huge for us.
Would you be ok to create a Bugzilla ticket, asking for that printf item?
It doesn't have to be complicated or super in depth or anything. :)
Asking because when something is a ticket, the "task" is much easier to hand to someone so it gets done.
If that's too much effort though, just tell me what you'd like as the ticket summary line + body text and I'll go create it. :)
Regards and best wishes,
Open Source and Standards @ Red Hat