[Gluster-devel] Is it safe to use synctask_{wake, yield} outside of __wake and __yield macros?
Krishnan Parthasarathi
kparthas at redhat.com
Fri Feb 15 19:23:22 UTC 2013
----- Original Message -----
>
>
>
> On Wed, Feb 13, 2013 at 8:37 PM, krish < kparthas at redhat.com > wrote:
>
> The strategy is to defer yield'ing of the task till a mgmt operation
> is sent to all the peers.
>
> If I understand it right, the following holds:
> - A function that begins execution in a synctask (i.e., in a thread
> from a syncenv's thread pool) will always resume execution (i.e.,
> wake up after a yield) in that same thread of the pool.
>
>
>
> Nope! A synctask can yield in one syncproc pthread and resume in a
> different one. But that should not matter for you as long as it is
> any one of those syncproc pthreads from the pool.
Ok, got it.
>
> If the above holds, then all syncops performed by mgmt operation
> handlers are guaranteed to be called from within a syncenv: the
> synctask is spawned at the rpcsvc layer for all glusterd mgmt
> programs.
>
> Following is an example code snippet.
>
> <code>
> int
> gd_lock_op_phase (struct list_head *peers, char **op_errstr, int npeers)
> {
>         ....
>
>         list_for_each_entry (peerinfo, peers, op_peers_list) {
>                 gd_syncop_mgmt_lock (peerinfo->rpc, aggr, index,
>                                      MY_UUID, peer_uuid);
>                 index++;
>         }
>
>         /* Blocks here until all the cbks return (call_cnt mechanism). */
>         synctask_yield (aggr->task);
>
>         ....
> }
>
> /* Note: the per-request yield that SYNCOP-style macros perform after
>  * submitting the request has been removed here; the caller above
>  * yields once for the whole batch instead. */
> #define GD_SYNCOP(rpc, stb, cbk, req, prog, procnum, xdrproc) do {     \
>                 int              ret  = 0;                              \
>                 struct synctask *task = NULL;                           \
>                                                                         \
>                 task = synctask_get ();                                 \
>                 stb->task = task;                                       \
>                 ret = gd_syncop_submit_request (rpc, req, stb,          \
>                                                 prog, procnum, cbk,     \
>                                                 (xdrproc_t)xdrproc);    \
>         } while (0)
> </code>
>
> Where/how is callcnt set, and decremented?
The following are the two structures which hold the 'state' of mgmt
ops in progress on a synctask:

struct gd_aggr_ {
        int               call_cnt;
        int               npeers;
        gf_lock_t         lock;
        int               op_ret;
        int               op_errno;
        struct synctask  *task;
        struct syncargs **args;
};
typedef struct gd_aggr_ gd_aggr_t;

struct gd_local_ {
        int        index;
        gd_aggr_t *aggr;
};
typedef struct gd_local_ gd_local_t;
A typical mgmt operation's cbk would look as follows:

        ...
out:
        LOCK (&aggr->lock);
        {
                call_cnt = --aggr->call_cnt;
        }
        UNLOCK (&aggr->lock);

        gd_local_wipe (local);
        STACK_DESTROY (frame->root);

        /* Only the last cbk to return (call_cnt drops to zero)
         * aggregates the results and wakes the yielded task. */
        if (!call_cnt) {
                for (i = 0; i < aggr->npeers; i++) {
                        if (aggr->args[i]->op_ret) {
                                aggr->op_ret = -1;
                                break;
                        }
                }
                synctask_wake (aggr->task);
        }
        ...
gd_aggr_t is initialised before we start issuing the batch of the same
kind of mgmt operation (e.g., lock) to all the peers. Each frame's
local holds a pointer to the same single instance of gd_aggr_t, which
makes a call_cnt mechanism possible across frames without an
additional frame being used.
Hope that answers your question.
Krish
>
>
> Avati
>
> Let me know if we can come up with a generic framework for what I am
> trying to do here.
>
> thanks,
> krish
>
>
>
> On 02/14/2013 05:23 AM, Anand Avati wrote:
>
> So you are not using the SYNCOP() macro, right? Can you show a code
> snippet of how you are trying to fan-out and yield? We could
> probably come up with a generic framework for such
> fan-out->yield->wake pattern.
>
>
> You should be able to call syncop_yield() instead of __yield() if you
> are _sure_ that the caller is going to be from within a syncenv.
>
>
> Avati
>
>
> On Wed, Feb 13, 2013 at 11:29 AM, Krishnan Parthasarathi <
> kparthas at redhat.com > wrote:
>
>
> In glusterd, I am trying to perform a series of syncops in a batch;
> i.e., yield the thread only once all the non-blocking operations are
> queued. The wakeup of the yielded thread happens as part of the
> call_cnt mechanism in the callback(s).
>
> Given this, I wanted to know whether I would be violating any
> assumptions if I used synctask_yield and synctask_wake as opposed to
> their macro counterparts. More specifically, is there a chance that
> synctask_get() would return NULL on a thread which is part of a
> syncenv's thread pool?
>
> thanks,
> krish
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>