[Gluster-devel] Handling locks in NSR

Shyam srangana at redhat.com
Wed Mar 2 20:59:47 UTC 2016


On 03/02/2016 03:10 AM, Avra Sengupta wrote:
> Hi,
>
> All fops in NSR follow a specific workflow, as described in this
> UML (https://docs.google.com/presentation/d/1lxwox72n6ovfOwzmdlNCZBJ5vQcCaONvZva0aLWKUqk/edit?usp=sharing).
> However, all locking fops will follow a slightly different workflow, as
> described below. This is a first proposed draft for handling locks, and
> we would like to hear your concerns and queries regarding the same.

This change, to handle locking FOPs differently, is due to what 
limitation/problem? (apologies if I missed an earlier thread on the same)

My understanding is that the actual FOP could fail or block (in the 
non-blocking and blocking cases respectively) because an existing lock 
is already held, and hence just adding a journal entry and meeting 
quorum is not sufficient for the success of the FOP (it is necessary, 
though, to handle lock preservation in the event of a leadership 
change); rather, actually acquiring the lock is. Is this understanding 
right?
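
To make that concrete, here is a minimal, self-contained sketch 
(Python; the Journal and LockTable names are mine, purely illustrative 
and not NSR/GlusterFS interfaces) of why a journaled, even 
quorum-approved, lock FOP can still fail at the actual locks layer:

    # Journaling a lock FOP is not the same as acquiring the lock.
    class Journal:
        def __init__(self):
            self.entries = []

        def append(self, entry):
            self.entries.append(entry)
            return len(self.entries) - 1      # index of the new entry

        def invalidate(self, index):
            self.entries[index] = None        # entry must not be replayed

    class LockTable:
        def __init__(self):
            self.held = {}                    # byte range -> owner

        def try_acquire(self, byte_range, owner):
            if byte_range in self.held:       # conflict: non-blocking fails
                return False
            self.held[byte_range] = owner
            return True

    journal, locks = Journal(), LockTable()

    # Client A's lock: journaled, then actually acquired -- fine.
    journal.append(("LK", "client-A", (0, 100)))
    assert locks.try_acquire((0, 100), "client-A")

    # Client B's conflicting non-blocking lock: the journal entry (and
    # any quorum reached on it) is not enough; the acquisition itself
    # fails, so the entry has to be invalidated and a -ve ack returned.
    idx = journal.append(("LK", "client-B", (0, 100)))
    if not locks.try_acquire((0, 100), "client-B"):
        journal.invalidate(idx)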

Based on the above understanding of mine, and the discussion below, 
the intention seems to be to place the locks xlator below the journal. 
What if we place this xlator above the journal instead, with the added 
requirement that FOPs handled by this xlator must still reach the 
journal?

Assuming we adopt this strategy (i.e. the locks xlator is above the 
journal xlator), a successful lock acquisition by the locks xlator is 
not enough to guarantee that the lock is preserved across the replica 
group; hence the FOP has to reach the journal, and as a result pass 
through the other replica members' journal and locks xlators as well.
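
As a rough illustration of that stacking (again Python; all class and 
method names are mine, a sketch under the assumption that each replica 
runs its own locks-over-journal stack, not actual xlator code):

    class JournalXlator:
        def __init__(self):
            self.entries = []

        def lk(self, byte_range, owner):
            self.entries.append(("LK", owner, byte_range))
            return True

    class LocksXlator:
        def __init__(self, below):
            self.below = below                # next xlator down: the journal
            self.held = set()

        def lk(self, byte_range, owner):
            if byte_range in self.held:
                return False                  # conflict caught at the locks layer
            self.held.add(byte_range)
            # Acquiring locally is not enough: the FOP must still reach
            # the journal so the lock survives a leadership change.
            return self.below.lk(byte_range, owner)

    def replica_stack():
        return LocksXlator(JournalXlator())

    leader, followers = replica_stack(), [replica_stack(), replica_stack()]

    def lk_fop(byte_range, owner):
        # The request passes through the leader's locks and journal
        # xlators, and then through every follower's stack as well.
        acks = [leader.lk(byte_range, owner)]
        acks += [f.lk(byte_range, owner) for f in followers]
        return sum(acks) > len(acks) // 2     # simple majority quorum

    assert lk_fop((0, 100), "client-A")       # acquired and journaled everywhere
    assert not lk_fop((0, 100), "client-B")   # conflicts at every locks xlator

(Rollback of the local acquisition on quorum failure is omitted here; 
it would mirror the rollback steps in the workflow quoted below.)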

If we do the above, what are the advantages and repercussions of the same?

Some of the points noted here (like conflicting non-blocking locks 
when the previous lock is not yet released) could be handled this way. 
Also, in your scheme, what happens to blocking lock requests? The FOP 
will block, and there is no asynchronous return through which to 
handle its success or failure.
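
A small sketch of why the blocking case is awkward (Python threading; 
the names and shapes are mine, purely illustrative): a non-blocking 
request returns immediately, so a -ve result can be acked or the 
journal entry invalidated right away, whereas a blocking request parks 
inside the locks layer with nothing to ack until much later:

    import threading

    class LockTable:
        def __init__(self):
            self.cond = threading.Condition()
            self.held = set()

        def acquire(self, byte_range, blocking):
            with self.cond:
                if not blocking:
                    if byte_range in self.held:
                        return False          # immediate -ve: easy to handle
                    self.held.add(byte_range)
                    return True
                while byte_range in self.held:
                    self.cond.wait()          # the FOP parks here: nothing
                                              # to journal or ack until later
                self.held.add(byte_range)
                return True

        def release(self, byte_range):
            with self.cond:
                self.held.discard(byte_range)
                self.cond.notify_all()

    table = LockTable()
    table.acquire((0, 100), blocking=False)   # client A holds the range

    waiter = threading.Thread(target=table.acquire, args=((0, 100), True))
    waiter.start()                            # client B blocks behind A...
    table.release((0, 100))                   # ...and proceeds only now
    waiter.join()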

The downside is that on reconciliation we may need to undo some of the 
locks held by the locks xlator (on the new leader), which is outside 
the scope of the journal xlator.
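
Something along these lines (a hypothetical reconciliation helper; the 
data shapes are mine, only to illustrate the undo that falls outside 
the journal xlator's scope) is what the new leader would have to run:

    def reconcile_locks(held_locks, journal_entries, quorum):
        """Release held locks whose journal entries never met quorum."""
        confirmed = {e["range"] for e in journal_entries
                     if e["acks"] >= quorum}
        stray = [r for r in held_locks if r not in confirmed]
        for r in stray:
            held_locks.remove(r)              # undo the unconfirmed acquisition
        return stray

    held = {(0, 100), (200, 300)}
    entries = [{"range": (0, 100), "acks": 3},    # met quorum: keep the lock
               {"range": (200, 300), "acks": 1}]  # did not: release it
    assert reconcile_locks(held, entries, quorum=2) == [(200, 300)]
    assert held == {(0, 100)}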

I also assume we need to do the same for the leases xlator as well, right?
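
For reference while reading the steps quoted below, here is a compact 
sketch of the whole workflow as I read it (Python; the Replica class 
and all names are mine, not NSR code, and the error handling is 
limited to what the steps themselves describe):

    class Replica:
        def __init__(self):
            self.journal, self.held = [], set()

        def journal_entry(self, entry):
            rec = {"entry": entry, "state": "pending"}
            self.journal.append(rec)
            return rec

        def lk(self, byte_range):             # steps 1 and 2
            rec = self.journal_entry(("LK", byte_range))
            if byte_range in self.held:
                rec["state"] = "invalid"      # invalidate the journal entry
                return False
            self.held.add(byte_range)
            rec["state"] = "complete"
            return True

        def rollback(self, byte_range):       # steps 4 and 5
            rec = self.journal_entry(("ROLLBACK", byte_range))
            self.held.discard(byte_range)
            rec["state"] = "complete"

    def lk_fop(leader, followers, byte_range):
        if not leader.lk(byte_range):                  # step 1
            return "-ve ack"
        acks = [f.lk(byte_range) for f in followers]   # step 2
        if 1 + sum(acks) > (1 + len(followers)) // 2:  # step 3: quorum check
            return "+ve ack"
        for f, ok in zip(followers, acks):             # step 4: follower rollback
            if ok:
                f.rollback(byte_range)
        leader.rollback(byte_range)                    # step 5: leader rollback
        return "-ve ack"

    leader, followers = Replica(), [Replica(), Replica()]
    assert lk_fop(leader, followers, (0, 100)) == "+ve ack"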

>
> 1. On receiving the lock request, the leader will journal the lock
> itself, and then try to actually acquire it. At this point, if it
> fails to acquire the lock, it will invalidate the journal entry and
> return a -ve ack back to the client. However, if it is successful in
> acquiring the lock, it will mark the journal entry as complete, and
> forward the fop to the followers.
>
> 2. The followers on receiving the fop, will journal it, and then try to
> actually acquire the lock. If it fails to acquire the lock, then it will
> invalidate the journal entry, and return a -ve ack back to the leader.
> If it is successful in acquiring the lock, it will mark the journal
> entry as complete, and send a +ve ack to the leader.
>
> 3. The leader on receiving all acks, will perform a quorum check. If
> quorum is met, it will send a +ve ack to the client. If the quorum
> fails, it will send a rollback to the followers.
>
> 4. The followers on receiving the rollback, will journal it first, and
> then release the acquired lock. It will update the rollback entry in the
> journal as complete and send an ack to the leader.
>
> 5. The leader on receiving the rollback acks, will journal its own
> rollback, and then release the acquired lock. It will update the
> rollback entry in the journal, and send a -ve ack to the client.
>
> Few things to be noted in the above workflow are:
> 1. It will be a synchronous operation across the replica volume.
> 2. Reconciliation will take care of nodes that have missed out on the
> locks.
> 3. On a client disconnect, there will be a lock-timeout on whose
> expiration all locks held by that particular client will be released.
>
> Regards,
> Avra
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

