[Gluster-devel] Suggestion needed to make use of iobuf_pool as rdma buffer.

Anand Avati avati at gluster.org
Tue Jan 13 18:41:03 UTC 2015


3) Why not have a separate iobuf pool for RDMA?

On Tue Jan 13 2015 at 6:30:09 AM Mohammed Rafi K C <rkavunga at redhat.com>
wrote:

> Hi All,
>
> When using the RDMA protocol, every buffer that will be sent over RDMA
> must first be registered with the RDMA device. Registration is a costly
> operation, and a performance killer if it happens in the I/O path. Our
> current plan is therefore to register the pre-allocated iobuf_arenas
> from the iobuf_pool with RDMA when the RDMA transport is initialized.
> The problem arises when all the iobufs are exhausted: new arenas are
> then allocated dynamically inside the libglusterfs module, and since
> libglusterfs cannot call into the rdma transport, we are forced to
> register each iobuf from the newly created arenas in the I/O path. If
> io-cache is enabled in the client stack, all the pre-registered arenas
> end up held by io-cache as cache buffers, so we have to perform a
> registration in rdma for every iobuf on every I/O call and can never
> benefit from the pre-registered arenas.
>
> To address the issue, we have two approaches in mind:
>
>  1) Register each dynamically created buffer with RDMA by coupling the
> transport layer with libglusterfs.
>
>  2) Create a separate buffer for caching, and offload the data from the
> read response into that cache buffer in the background.
>
> If we could use pre-registered memory for every RDMA call, we expect
> approximately a 20% improvement for writes and 25% for reads.
>
> Please give your thoughts to address the issue.
>
> Thanks & Regards
> Rafi KC
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>