[Gluster-devel] Lease-lock Design notes
srangana at redhat.com
Fri Jul 17 20:08:08 UTC 2015
I have some questions about, and gaps in my understanding of, the current
lease design, and request some clarification on the same.
1) Is there a notion of a *gluster* client holding a lease on a file/dir?
- As per your Barcelona Gluster Summit presentation, client-side
caching needs this notion
- DHT/EC/AFR can leverage such a lease to avoid eager locking (or any
form of locking) when attempting to maintain consistency of the data
being operated on
- Please note that this does not require each xlator to request and
hold the lease; rather, a *gluster* client holds the lease, assuming
that one can do local in-memory locking to coordinate different
fds/connections/applications performing operations on the same file
through the *same* gluster client, without having to coordinate this
over the network
I see some references to this in the design but wanted to understand if
the thought is there.
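To make the client-local coordination I have in mind concrete, here is a rough sketch (all names hypothetical, not the actual gfapi or xlator API): once the gluster client holds the lease, later opens on the same file from other fds/applications on that client are arbitrated with an in-memory lock, with no further lease FOPs on the wire.

```python
import threading

class ClientLeaseTable:
    """Hypothetical per-gluster-client lease table: the first accessor
    of a file sends one lease FOP for the whole client; every other
    fd/connection/application on the same client is coordinated with a
    purely local in-memory lock."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._leases = {}       # path -> per-file in-memory lock
        self.network_fops = 0   # counts simulated lease FOPs sent to bricks

    def acquire_lease(self, path):
        with self._mutex:
            if path not in self._leases:
                self.network_fops += 1          # one LEASE FOP, client-wide
                self._leases[path] = threading.Lock()
            return self._leases[path]           # later fds reuse it locally

table = ClientLeaseTable()
lock_a = table.acquire_lease("/vol/file")   # first fd: network lease FOP
lock_b = table.acquire_lease("/vol/file")   # second fd: resolved locally
assert lock_a is lock_b and table.network_fops == 1
```

The point of the sketch is only the accounting: one lease FOP per client per file, everything else in memory.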
IOW, if an NFS client requests a lease, Ganesha requests the lease from
Gluster. The gfapi client that requested the lease first gets the lease
and then re-leases it to Ganesha. Ganesha is now free to lease it to any
of its clients on their behalf, recall leases, etc., as the case may be,
and the gluster client does not care. Only when another gluster client
(via Ganesha or otherwise, say self-heal or rebalance) attempts to get a
lease is the lease broken all across.
Is my understanding right? Is the design along these lines?
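A toy model of the delegation chain I am describing, to check my understanding (names are made up, not the design's API): the gfapi client takes the gluster-level lease once, Ganesha sub-leases it to any number of its NFS clients without further gluster involvement, and a recall happens only when a *different* gluster client asks.

```python
class GlusterLease:
    """Toy model: 'holder' is the gluster client holding the lease;
    Ganesha re-leases freely to its NFS clients while it is the holder;
    a conflicting request from another gluster client (e.g. self-heal
    or rebalance) forces a recall all across."""

    def __init__(self):
        self.holder = None       # gluster client currently holding the lease
        self.sub_leases = set()  # NFS clients Ganesha has re-leased to
        self.recalled = False

    def request(self, gluster_client, nfs_client=None):
        if self.holder is None:
            self.holder = gluster_client
        if gluster_client == self.holder:
            if nfs_client:
                self.sub_leases.add(nfs_client)  # handled locally by Ganesha
            return "granted"
        # a different gluster client conflicts: break the lease everywhere
        self.recalled = True
        self.sub_leases.clear()      # Ganesha must recall from its clients
        self.holder = gluster_client
        return "recalled-and-granted"

lease = GlusterLease()
assert lease.request("ganesha", "nfs-client-1") == "granted"
assert lease.request("ganesha", "nfs-client-2") == "granted"  # no recall
assert lease.request("rebalance") == "recalled-and-granted"   # breaks lease
```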
2) Why is the lease not piggybacked on the open? Why two network FOPs
instead of one in this case? How can we compound this lease request with
other requests? Or do you have thoughts around the same?
Don't NFS and SMB request leases with their open calls?
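To illustrate the round-trip argument (this is a counting sketch, not the actual FOP encoding): an NFSv4-style OPEN carries the delegation request in the same message, whereas separate open and lease FOPs cost two round trips.

```python
def send(ops):
    """Pretend network call: each invocation is one round trip carrying
    a list of compounded operations."""
    send.round_trips += 1
    return [f"{op}-ok" for op in ops]
send.round_trips = 0

# as the design currently reads: OPEN, then a separate LEASE FOP
send(["OPEN"])
send(["LEASE"])
# compounded, the way NFSv4 OPEN carries a delegation request
send(["OPEN", "LEASE"])

# two round trips for the split form vs one for the compounded form
assert send.round_trips == 3
```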
3) If there were discussions around some designs that were rejected,
could you also update the document with the same, so that one can
understand the motivation behind the current manner of implementing leases.
On 04/16/2015 07:37 AM, Soumya Koduri wrote:
> Below link contains the lease-lock design notes (as per the latest
> discussion we had). Thanks to everyone involved (CC'ed).
> Kindly go through the same and provide us your inputs.