[Bugs] [Bug 1539680] RDMA transport bricks crash

bugzilla at redhat.com
Mon Jun 17 11:02:55 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1539680

Amar Tumballi <atumball at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Priority|unspecified                 |low
             Status|NEW                         |CLOSED
                 CC|                            |atumball at redhat.com
         Resolution|---                         |WONTFIX
        Last Closed|                            |2019-06-17 11:02:55



--- Comment #3 from Amar Tumballi <atumball at redhat.com> ---
Jiri,

Apologies for the delay.

Thanks for the report, but we are not able to actively look into the RDMA transport
code, and are seriously considering dropping it from active support.

More on this @
https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html


> ‘RDMA’ transport support:
> 
> Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that era ran on InfiniBand. Engineers worked
> with Mellanox and got the technology into GlusterFS for faster data migration and data copying. Current-day kernels already achieve very good speeds
> with the IPoIB module itself, and there is no more bandwidth for experts in this area to maintain the feature, so we recommend migrating your volume
> to a TCP (IP-based) network.
> 
> If you are successfully using the RDMA transport, do get in touch with us so we can prioritize the migration plan for your volume. The plan is to
> work on this after the release, so that by version 6.0 we will have cleaner transport code which needs to support only one type.
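
For reference, migrating an existing volume off RDMA is usually just a transport change on the volume. The commands below are a rough sketch
rather than something verified on your setup (the volume name "testvol", the server name "server1", and the mount point are placeholders, and the
config.transport option name may vary across releases), so please double-check against the admin guide for your version:

    # unmount the volume on every client first
    umount /mnt/testvol

    # stop the volume, switch its transport to tcp, start it again
    gluster volume stop testvol
    gluster volume set testvol config.transport tcp
    gluster volume start testvol

    # remount on the clients explicitly over tcp
    mount -t glusterfs -o transport=tcp server1:/testvol /mnt/testvol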

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

