[Gluster-devel] Reduce memcpy in glfs read and write

Sachin Pandit spandit at commvault.com
Mon Jun 20 22:37:49 UTC 2016


Hi all,

I bid adieu to you all with the hope of crossing paths again, and that time has come rather quickly. It feels great to work on GlusterFS again.

Currently we are trying to write data backed up by Commvault Simpana to a GlusterFS volume (a disperse volume). To improve performance, I have implemented the proposal put forward by Rafi K C [1]. I have some questions regarding libgfapi and the iobuf pool.

To reduce one extra level of copy in glfs read and write, I have implemented a few APIs to request a buffer (similar to the one described in [1]) from the iobuf pool, which the application can then write its data into. With this implementation, when I try to reuse the same buffer for consecutive writes, I see a hang in syncop_flush during glfs_close (the backtrace of the hang is in [2]). I wanted to know whether reusing the buffer is recommended. If not, do we need to request a new buffer for each write?
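To make the question concrete, here is a rough sketch of the write loop I am describing. glfs_request_buffer() and glfs_release_buffer() are placeholder names standing in for the new buffer APIs in my patch (the actual names may differ), and error handling is trimmed:

    /* Rough sketch only: glfs_request_buffer()/glfs_release_buffer() are
     * placeholder names for the new iobuf-backed buffer APIs. */
    #include <glusterfs/api/glfs.h>
    #include <string.h>

    #define CHUNK_SIZE (128 * 1024)   /* assumed fixed chunk size */

    int
    write_chunks (glfs_fd_t *fd, char **chunks, size_t *lens, int nchunks)
    {
            int i;

            /* Strategy 1 (what I tried): request the buffer once from the
             * iobuf pool and reuse it for every write. */
            char *buf = glfs_request_buffer (fd, CHUNK_SIZE);  /* placeholder */
            if (!buf)
                    return -1;

            for (i = 0; i < nchunks; i++) {
                    /* The application fills the iobuf directly, so glfs_write
                     * does not need to copy the data again. */
                    memcpy (buf, chunks[i], lens[i]);
                    if (glfs_write (fd, buf, lens[i], 0) < 0)
                            break;
                    /* Strategy 2 would instead release buf here and request a
                     * fresh buffer from the iobuf pool before the next write. */
            }

            glfs_release_buffer (fd, buf);                     /* placeholder */
            return (i == nchunks) ? 0 : -1;
    }

With strategy 1, the subsequent glfs_close hangs in syncop_flush as described; strategy 2 is what I would fall back to if reuse is not supported.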

Setup: Distributed-Disperse (4 * (2+1)), bricks scattered over 3 nodes.

[1] http://www.gluster.org/pipermail/gluster-devel/2015-February/043966.html
[2] Attached file - bt.txt <http://www.gluster.org/pipermail/gluster-devel/attachments/20160620/c416b1cf/attachment.txt>

Thanks & Regards,
Sachin Pandit.




