[Gluster-users] tips/best practices for gluster rdma?

Matthew Nicholson matthew_nicholson at harvard.edu
Wed Jul 10 19:05:07 UTC 2013


Justin,

yeah, this fabric is all brand-new Mellanox, and all nodes are running their
v2 stack.
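
For anyone else wanting to check this: a rough sketch of how to confirm
which stack a node is actually running (ofed_info ships with Mellanox OFED
and won't exist on the stock distro stack; mlx4_core assumes
ConnectX-class cards):

    # Short version string of the installed Mellanox OFED,
    # e.g. "MLNX_OFED_LINUX-2.x"
    ofed_info -s

    # HCA model, port state, and firmware the node actually sees
    ibstat

    # Version of the HCA driver module actually loaded
    modinfo mlx4_core | grep -i '^version'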

oh, for a bug report, sure thing. I was thinking I would tack on a comment
here:

https://bugzilla.redhat.com/show_bug.cgi?id=982757

since that's about the silent failure.
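
In the meantime, a rough sketch of one way to spot the fallback by hand
(volume name "scratch", server "storage01", and the exact log path are
placeholders, not our real config):

    # Check which transports the volume was created with
    gluster volume info scratch | grep -i transport

    # Mount while explicitly requesting RDMA, so a fallback at least
    # leaves errors behind in the client log
    mount -t glusterfs -o transport=rdma storage01:/scratch /mnt/scratch

    # Scan the client log (named after the mount point) for rdma
    # errors around mount time
    grep -iE 'rdma|transport' /var/log/glusterfs/mnt-scratch.log | tail -20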

--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nicholson at harvard.edu



On Wed, Jul 10, 2013 at 3:00 PM, Justin Clift <jclift at redhat.com> wrote:

> On 10/07/2013, at 7:49 PM, Matthew Nicholson wrote:
> > Well, first of all, thanks for the responses. The volume WAS failing
> > over to TCP just as predicted, though WHY is unclear, as the fabric is
> > known working (it has about 28K compute cores on it, all doing heavy
> > MPI testing), and the OFED/verbs stack is consistent across all
> > client/storage systems (actually, the OS image is identical).
> >
> > That's quite sad RDMA isn't going to make 3.4. We put a good deal of
> > hope and effort into planning for 3.4 for this storage system,
> > specifically for RDMA support (well, with warnings to the team that it
> > wasn't in/tested for 3.3 and that all we could do was HOPE it was in
> > 3.4 in time for when we want to go live). We're getting "okay"
> > performance out of IPoIB right now, and our bottleneck actually seems
> > to be the fabric design/layout, as we're peaking at about 4.2GB/s
> > writing 10TB over 160 threads to this distributed volume (a quick
> > fabric-vs-Gluster check is sketched below, after the quoted thread).
>
> Out of curiosity, are you running the stock OS-provided InfiniBand stack,
> or are you using the "vendor optimised" version?  (e.g. "Mellanox OFED"
> if you're using Mellanox cards)
>
> Asking because although I've not personally done any perf measurements
> between them, Mellanox swears the new v2 of their OFED stack is much
> higher performance than either the stock drivers or their v1 stack.
> IPoIB is especially tuned.
>
> I'd really like to get around to testing that some time, but it won't be
> soon. :(
>
>
> > When it IS ready and in 3.4.1 (hopefully!), having good docs around it,
> > and maybe even a simple printf for the TCP failover would be huge for us.
>
> Would you be ok to create a Bugzilla ticket, asking for that printf item?
>
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=rdma
>
> It doesn't have to be complicated or super in depth or anything. :)
>
> Asking because when something is a ticket, the "task" is much easier to
> hand
> to someone so it gets done.
>
> If that's too much effort though, just tell me what you'd like as the
> ticket
> summary line + body text and I'll go create it. :)
>
> Regards and best wishes,
>
> Justin Clift
>
> --
> Open Source and Standards @ Red Hat
>
> twitter.com/realjustinclift
>
>
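
PS: re the IPoIB numbers quoted above, a rough sketch of separating raw
fabric bandwidth from Gluster overhead (ib_write_bw is from the perftest
package; hostnames and the -P stream count are placeholders):

    # Raw RDMA write bandwidth between two nodes
    ib_write_bw                  # run on the "server" node first
    ib_write_bw storage01        # then aim a second node at it

    # TCP over IPoIB, which is what the tcp transport rides on
    iperf -s                     # on storage01 (reach it via its IPoIB address)
    iperf -c storage01-ib -P 8   # 8 parallel streams from a client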

