[Gluster-devel] [puzzle] readv operation allocate iobuf twice
Zhengping Zhou
johnzzpcrystal at gmail.com
Tue Jul 12 23:51:16 UTC 2016
I have already filed a bug with bugid 1354205, but my current patch
still has a problem in my test environment; I'll check it out and post
later.
2016-07-12 12:38 GMT+08:00 Raghavendra Gowdappa <rgowdapp at redhat.com>:
>
>
> ----- Original Message -----
>> From: "Zhengping Zhou" <johnzzpcrystal at gmail.com>
>> To: gluster-devel at gluster.org
>> Sent: Tuesday, July 12, 2016 9:28:01 AM
>> Subject: [Gluster-devel] [puzzle] readv operation allocate iobuf twice
>>
>> Hi all:
>>
>> It is a puzzle to me that we allocate rsp buffers for the response
>> content in function client3_3_readv, but these rsp parameters are never
>> saved to struct saved_frame during the submit procedure.
>
> Good catch :). We were aware of this issue, but the fix wasn't prioritized. Can you please file a bug on this? If you want to send a fix (which essentially stores the rsp payload ptr in saved-frame and passes it down during rpc_clnt_fill_request_info - as part of handling RPC_TRANSPORT_MAP_XID_REQUEST event in rpc-clnt), please post a patch to gerrit and I'll accept it. If you don't have bandwidth, one of us can send out a fix too.
>
> Again, thanks for the effort :).
>
> regards,
> Raghavendra
>
>> Which means
>> the iobuf will be reallocated by the transport layer in function
>> __socket_read_accepted_successful_reply.
>> According to the comment of function rpc_clnt_submit:
>> 1. Both @rsp_hdr and @rsp_payload are optional.
>> 2. The user of rpc_clnt_submit, if wants response hdr and payload in its
>> own
>> buffers, then it has to populate @rsphdr and @rsp_payload.
>> ....
>> The @rsp_payload is optional; the transport layer will not reallocate
>> rsp buffers if
>> it is populated. But in fact the readv operation allocates the rsp buffer twice.
>>
>> Thanks
>> Zhengping
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>