[Gluster-devel] rpc problems when using syncops in callbacks
Krishnan Parthasarathi
kparthas at redhat.com
Mon Apr 29 11:57:43 UTC 2013
Fog,
I hope this reaches you before you decide to start over with the
STACK_WIND/STACK_UNWIND (async) macros and drop the syncop approach.
On 04/29/2013 05:03 PM, fog - wrote:
> Hello Krish,
>
> yes, no deadlock occurs without blocking (... somewhat obviously).
> However, if I can't block I do not gain anything regarding code
> readability. It makes more sense to use the standard STACK_WIND /
> UNWIND pairs instead of creating a syncthread with a callback function.
>
> I could use this approach if I start the syncthread in the FOP instead
> of the CBK function (and use syncops for everything). The problem is
> that in my scenario only the return of the FOP will tell me if
> additional FOPs need to be executed (and 99%+ of the time this won't
> be the case). This makes spawning a syncthread every time sound like a
> bad idea.
Making the fop use the synctask framework DOESN'T mean you are going to
spawn a new thread every time! It actually means you are scheduling
another task into the synctask environment. All you need to ensure is
that you don't 'yield' while on the epoll thread.
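For instance, from a cbk (which runs on the epoll thread) you can hand
the blocking work over to a synctask and return right away. A minimal
sketch, reusing the syncstore_args struct from your code below;
my_worker, my_worker_done and my_fop_cbk are names of my own making,
gf_common_mt_char is just an illustrative memory type, and the cbk
signature is the synctask_cbk_t from libglusterfs' syncop.h:

int32_t my_worker (void *data)
{
        syncstore_args *args = data;

        /* inside the synctask it is safe to call syncop_* and yield */
        return syncop_setxattr (FIRST_CHILD (args->this), args->loc,
                                args->dic, 0);
}

int32_t my_worker_done (int ret, call_frame_t *frame, void *data)
{
        GF_FREE (data);   /* args must be heap-allocated in this mode */
        return ret;
}

int32_t my_fop_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                    int32_t op_ret, int32_t op_errno, dict_t *xdata)
{
        syncstore_args *args = NULL;

        args = GF_CALLOC (1, sizeof (*args), gf_common_mt_char);
        if (args) {
                args->this = this;
                /* args->loc / args->dic come from your fop's local */

                /* with a non-NULL cbk, synctask_new returns
                   immediately: the epoll thread merely schedules the
                   task and moves on */
                synctask_new (this->ctx->env, my_worker, my_worker_done,
                              NULL, args);
        }

        STACK_UNWIND_STRICT (setxattr, frame, op_ret, op_errno, xdata);
        return 0;
}

Whether you can STACK_UNWIND before the scheduled work finishes is of
course up to your fop's semantics.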
You could take a look at how the mgmt/glusterd[1] xlator uses the
synctask framework to provide synchronous wrappers for the mgmt
operations. In glusterd, the rpc programs have been marked to use
synctask at the rpcsvc layer. What this means is that each rpc request
is run in a synctask while the epoll thread returns to listening for
new network events. With this approach you have the guarantee that
epoll is never held hostage when you yield. And all your code looks
pretty and synchronous.
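The marking itself is just a flag on the rpc program structure. A
sketch from memory (the exact field lives in rpc/rpc-lib/src/rpcsvc.h,
and the symbol names here are illustrative):

struct rpcsvc_program gd_svc_mgmt_prog = {
        .progname  = "GlusterD svc mgmt",
        .prognum   = GD_MGMT_PROGRAM,
        .progver   = GD_MGMT_VERSION,
        .numactors = GD_MGMT_MAXVALUE,
        .actors    = gd_svc_mgmt_actors,
        .synctask  = _gf_true,  /* run each request in a synctask */
};

When synctask is _gf_false (the default), the actor is executed
directly on the epoll thread instead.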
This approach may not work too well for you. But I am just saying how I
got across the synctask 'barrier' (pun not intended) while making
glusterd's mgmt operations (appear) synchronous.
[1] - xlators/mgmt/glusterd/src/glusterd-syncop.c
HTH,
krish
>
> I think you correctly identified the deadlock in your other reply,
> by the way. It seems using syncops correctly is a bit more
> complicated than I originally assumed; I'll probably go back to
> STACK_WIND / UNWIND chains even if the resulting code is quite messy.
>
> Thanks for your insight
> ~fog
>
> ------------------------------------------------------------------------
> Date: Mon, 29 Apr 2013 14:30:50 +0530
> From: kparthas at redhat.com
> To: fog_is_my_name at hotmail.com
> CC: gluster-devel at nongnu.org
> Subject: Re: [Gluster-devel] rpc problems when using syncops in callbacks
>
> Fog,
>
> On 04/29/2013 01:57 PM, fog - wrote:
>
> Hello Avati,
>
> I am wrapping the syncop call in a synctask_new (otherwise
> GlusterFS hits a null pointer in synctask_get inside the
> SYNCOP macro and crashes). Below is the code I currently use
> to test the syncops.
>
> typedef struct {
>         xlator_t *this;
>         loc_t    *loc;
>         dict_t   *dic;
> } syncstore_args;
>
> /* runs inside the synctask; safe to call syncop_* here */
> int32_t __xattr_store_sync (void *data)
> {
>         syncstore_args *args = data;
>
>         return syncop_setxattr (FIRST_CHILD (args->this), args->loc,
>                                 args->dic, 0);
> }
>
> int32_t xattr_store_sync (xlator_t *this, call_frame_t *frame,
>                           loc_t *loc, dict_t *dic)
> {
>         syncstore_args args = {this, loc, dic};
>
>         return synctask_new (this->ctx->env, __xattr_store_sync, NULL,
>                              NULL, &args);
> }
>
> If you don't provide a synctask_cbk_t to synctask_new, you are using
> synctask in a 'blocking' mode.
> That is, the thread calling synctask_new would block until the
> synctask_fn_t function (ie, __xattr_store_sync) returns.
> An alternative way to do this would be,
>
> int32_t xattr_store_sync (xlator_t *this, call_frame_t *frame,
>                           loc_t *loc, dict_t *dic)
> {
>         syncstore_args args = {this, loc, dic};
>
>         return synctask_new (this->ctx->env, __xattr_store_sync,
>                              __xattr_store_sync_cbk, NULL, &args);
> }
>
> int32_t __xattr_store_sync_cbk (int ret, /* and the other args */)
> {
>         /* your code goes here */
>         return ret;
> }
>
> Now, all file operations performed using syncop_* inside
> __xattr_store_sync would have the synchronous flavour, while
> leaving the calling thread (the thread calling xattr_store_sync)
> 'free'. This should avoid the hang issue.
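>
> One caveat with this variant: since xattr_store_sync now returns
> immediately, 'args' must not live on its stack any more. A sketch
> (GF_CALLOC/GF_FREE are the libglusterfs allocators; the memory type
> gf_common_mt_char is just for illustration):
>
> syncstore_args *args = GF_CALLOC (1, sizeof (*args),
>                                   gf_common_mt_char);
> if (!args)
>         return -1;
> args->this = this;
> args->loc  = loc;  /* strictly, a loc_copy'd location */
> args->dic  = dic;  /* strictly, a dict_ref'd dict */
> return synctask_new (this->ctx->env, __xattr_store_sync,
>                      __xattr_store_sync_cbk, NULL, args);
>
> with a matching GF_FREE (and loc_wipe/dict_unref) in
> __xattr_store_sync_cbk.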
>
> HTH,
> krish
>
> ------------------------------------------------------------------------
> Date: Mon, 29 Apr 2013 00:19:11 -0700
> Subject: Re: [Gluster-devel] rpc problems when using syncops in
> callbacks
> From: anand.avati at gmail.com
> To: fog_is_my_name at hotmail.com
> CC: gluster-devel at nongnu.org
>
> Note that you need to place your syncop code in a synctask
> function strictly within a syncenv (by calling synctask_new()).
> You're probably calling syncop_XXX() directly in your xlator code?
>
> Avati
>
>
> On Fri, Apr 26, 2013 at 2:40 AM, fog - <fog_is_my_name at hotmail.com>
> wrote:
>
> Hello everyone,
>
> I am trying to use syncops in a custom translator to keep my
> code at least borderline readable, but I am having limited
> success.
>
> Problem Symptoms:
> Using a syncop in a regular fop is fine. However, in a
> callback it causes a 'freeze' (synctask_yield called by the
> SYNCOP macro doesn't return).
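>
> To illustrate, the pattern that freezes looks like this (simplified
> sketch; my_syncop_helper stands for the wrapper that runs the
> actual SYNCOP):
>
> int32_t my_fop_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
>                     int32_t op_ret, int32_t op_errno, dict_t *xdata)
> {
>         /* the same helper completes fine when called from the fop,
>            but called from this cbk it never returns */
>         my_syncop_helper (this);
>
>         STACK_UNWIND_STRICT (setxattr, frame, op_ret, op_errno, xdata);
>         return 0;
> }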
>
> What seems to be the Problem:
> Looking at the traces, there is no trace from rpc_clnt_reply_init
> on the client corresponding to the trace from rpcsvc_submit_generic
> on the server. In other words, the rpc reply gets sent but is never
> correctly received. Obviously this is not really a networking
> problem but something else... I'd guess it's a deadlock somewhere
> on the client?
> From the point of the syncop call onwards, the client doesn't 'get'
> any rpc replies any more (the next GlusterFS handshake sent by the
> client is received and replied to by the server, yet it still ends
> in a disconnect).
>
> Again: this problem only occurs when calling a syncop from a
> callback function inside my translator; if I call the same syncop
> in a fop it completes fine.
>
> I hope you can make sense out of the above problem description.
> Thanks for your time ~