[Gluster-devel] Lease-lock Design notes
Soumya Koduri
skoduri at redhat.com
Mon Aug 24 06:58:04 UTC 2015
Hi,
We have captured the lease design notes in the gluster-specs repository.
URL: http://review.gluster.org/#/c/11980/
In addition, we have also updated the summary of the proposed
approaches, based on our recent discussions, to address some of the
open issues such as lease-state migration and healing.
Request your inputs/comments.
Thanks,
Soumya
Poornima
On 07/22/2015 09:22 PM, Soumya Koduri wrote:
>
>
> On 07/22/2015 06:33 PM, Shyam wrote:
>> Thanks for the responses. <some comments inline>
>>
>> Who is doing/attempting client-side caching improvements for Gluster 4.0
>> (or before that)? Just asking; getting their opinion on this framework
>> would be helpful and could possibly prevent any future *major*
>> upheavals of it, *hopefully*.
>>
> Agree. Xavier and Jeff had some ideas on client-side caching -
> http://www.gluster.org/community/documentation/index.php/Features/caching.
> We had a chat with Xavi regarding the design of leases, which can help
> their effort. I am not sure if anyone is actively working on it right
> now. But once we have leases.md (capturing the latest design changes)
> ready, we plan to take inputs from him and the community.
>
>
>> On 07/21/2015 09:20 AM, Soumya Koduri wrote:
>>>
>>>
>>> On 07/21/2015 02:49 PM, Poornima Gurusiddaiah wrote:
>>>> Hi Shyam,
>>>>
>>>> Please find my reply inline.
>>>>
>>>> Regards,
>>>> Poornima
>>>>
>>>> ----- Original Message -----
>>>>> From: "Ira Cooper" <icooper at redhat.com>
>>>>> To: "Shyam" <srangana at redhat.com>
>>>>> Cc: "Gluster Devel" <gluster-devel at gluster.org>
>>>>> Sent: Saturday, July 18, 2015 4:09:30 AM
>>>>> Subject: Re: [Gluster-devel] Lease-lock Design notes
>>>>>
>>>>> 1. Yes, it is intentional. The internals of gluster should be able to
>>>>> use lease-locks. We discussed using them in the read-ahead and
>>>>> write-behind translators.
>>>>> 2. This has been discussed and proposed, but there is actually a
>>>>> need for a lease fop as well, because clients can request the
>>>>> "renewal" or "reinstatement" of a lease. (Actually, for Samba,
>>>>> having it all be one atomic call is very interesting.)
>>>>> 3. This I can't answer... I haven't been in the most recent
>>>>> discussions. But the intent of this work, when I started, was to be
>>>>> useful to the whole of Gluster, not just Samba or Ganesha.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> -Ira
>>>>>
>>>>> ----- Original Message -----
>>>>>> Hi,
>>>>>>
>>>>>> I have some questions or gaps in understanding the current lease
>>>>>> design,
>>>>>> request some clarification on the same.
>>>>>>
>>>>>> 1) Is there a notion of a *gluster* client holding a lease on a
>>>>>> file/dir?
>>>> Yes, a lease is granted to a gluster client on a file; dir leases are
>>>> not yet implemented, but are on the cards.
>>>> The lease can be requested from any xlator, and is granted for the
>>>> whole client.
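>>>>
>>>> A rough sketch of that granularity (hypothetical names, purely
>>>> illustrative -- not the on-the-wire format): the lease is keyed on
>>>> the file and the client as a whole, never on an fd or an xlator.
>>>>
>>>>     /* Illustrative only; field names are hypothetical. */
>>>>     struct gf_lease_sketch {
>>>>             unsigned char gfid[16];       /* file identity (GFID)    */
>>>>             char          client_uid[64]; /* whole-client identity;  */
>>>>                                           /* all xlators on this     */
>>>>                                           /* client share the lease  */
>>>>             int           lease_type;     /* e.g. read vs read-write */
>>>>     };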
>>
>> (sort of shooting from the hip here) Who holds the lease on the gluster
>> client? When an xlator wants a lease, is the FOP sent to its subvol or
>> from the top?
> Currently it is only the applications (SMB/NFS-Ganesha) which hold the
> lease, and gfapi tracks only those leases which need to be recalled (as
> part of the upcall infrastructure).
> We may also track these leases in the inode_ctx in the gfapi xlator to
> detect and prevent duplicate recall notifications.
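>
> A self-contained sketch of that duplicate-recall guard (the real
> version would live in the gfapi xlator and stash the flag in the
> inode_ctx; the names below are hypothetical):
>
>     #include <stdbool.h>
>
>     struct lease_state {
>             bool recall_sent;   /* kept per inode in the real code */
>     };
>
>     /* Returns true if a recall notification should go out now. */
>     static bool
>     should_notify_recall (struct lease_state *ls)
>     {
>             if (ls->recall_sent)
>                     return false;   /* already recalled; skip dup */
>             ls->recall_sent = true;
>             return true;
>     }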
>
>>
>> Maybe some pointers within the patches posted that handle this notion
>> would help me process this better (or maybe I should go through the
>> entire set anyway :) )
>>
> The lease patches posted earlier are being revamped. A few of them are
> posted below -
> http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:leases
>
>
>>>>
>>>>>> - As per your Barcelona gluster summit presentation, client-side
>>>>>> caching needs this notion
>>>>>> - DHT/EC/AFR can leverage such a lease to prevent eager or any form
>>>>>> of locking when attempting to hold consistency of the data being
>>>>>> operated on
>>>>>> - Please note that this requires not each xlator requesting and
>>>>>> holding the lease, but a *gluster* client holding the lease, assuming
>>>>>> that one can do local in-memory locking to prevent different
>>>>>> fds/connections/applications performing operations on the same file
>>>>>> against the *same* gluster client, without having to coordinate this
>>>>>> with the brick
>>> Apart from the in-memory locking (for different fds) which you have
>>> mentioned, the other complexity involved here (which I can think of atm)
>>> is that, unlike the existing inodelk/entrylk locks, leases can be
>>> recalled and revoked by the server. We need to consider the amount of
>>> time needed by each of these xlators to finish their tasks (which may
>>> include re-acquiring locks) before they send the recall_lease event to
>>> their parent xlators. Or (as already done in the NFS protocol), we need
>>> a way to let the server increase the recall_lease timeout dynamically
>>> if the client is diligently flushing the data, which I think is doable.
>>> But the switch between leases <-> locks sounds racy :)
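>>>
>>> A sketch of that dynamic-timeout idea (hypothetical names, purely
>>> illustrative): as long as the lease holder keeps flushing dirty
>>> data, the server pushes the revoke deadline out instead of revoking.
>>>
>>>     #include <stdbool.h>
>>>     #include <time.h>
>>>
>>>     #define RECALL_TIMEOUT_SECS 60    /* assumed grace period */
>>>
>>>     struct recall_timer {
>>>             time_t deadline;          /* when to revoke the lease */
>>>     };
>>>
>>>     /* Called whenever a flush from the lease holder lands. */
>>>     static void
>>>     on_client_flush (struct recall_timer *t)
>>>     {
>>>             /* Client is making progress: extend the deadline. */
>>>             t->deadline = time (NULL) + RECALL_TIMEOUT_SECS;
>>>     }
>>>
>>>     static bool
>>>     should_revoke (const struct recall_timer *t)
>>>     {
>>>             return time (NULL) >= t->deadline;  /* no progress */
>>>     }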
>>
>> Well, I would say this is *almost* no different from when a lease to a
>> client is broken. Additionally, this is where we can possibly think of
>> compounding the lock requests across xlators (and other such
>> lease-breaking requirements, IOW *compound* actions/verbs again).
>>
> Seems so. As mentioned above, with the current design only the
> applications maintain and handle the leases. We haven't explored much
> about other xlators making use of these locks.
>
>>>
>>> That said, as Poornima has mentioned below, we have currently started
>>> with requesting & handling leases at the application layer. Later we
>>> shall explore having a client-side xlator to handle leases and cache
>>> data.
>>>
>>> Thanks,
>>> Soumya
>>>>>>
>>>>>> I see some references to this in the design but wanted to
>>>>>> understand if the thought is there.
>>>> The initial thought behind having leases in Gluster was to support
>>>> multi-protocol access; apart from this, another use case we saw was
>>>> having a Gluster client-side cache xlator which takes leases before
>>>> caching. Yes, DHT/EC/AFR could also leverage leases, but I am not
>>>> sure if leases can replace eager locks.
>>
>> I am not sure either, but from a bird's-eye view, a lease to a client
>> means no other client/process is accessing the data, so by extension we
>> do not need locks on the brick, etc. Maybe others can chime in on the
>> possibilities here.
>>
> This seems doable. Once we post our leases.md doc, shall we have a
> discussion on the #gluster channel exclusively about how it can be
> improved to accommodate these extensions in the future?
>
>>>>
>>>>>>
>>>>>> IOW, if an NFS client requests a lease, Ganesha requests the lease
>>>>>> from Gluster; the gfapi client that requested the lease first gets
>>>>>> the lease and then re-leases it to Ganesha. Now Ganesha is free to
>>>>>> lease it to any client on its behalf, recall leases, etc., as the
>>>>>> case may be, and the gluster client does not care. When another
>>>>>> gluster client (due to Ganesha or otherwise, say self-heal or
>>>>>> rebalance) attempts to get a lease, that is when the lease is
>>>>>> broken all across.
>>>>>>
>>>
>>>>>> Is my understanding right, is the design along these lines?
>>>> Yes, that is right; any conflicts between its (Ganesha/Samba) clients
>>>> should be resolved by the Ganesha/Samba server. The Gluster server
>>>> will handle the conflicts across gluster clients.
>>>>
>>>>>>
>>>>>> 2) Why is the lease not piggy-backed on the open? Why 2 network FOPs
>>>>>> instead of 1 in this case? How can we compound this lease with other
>>>>>> requests? Or do you have thoughts around the same?
>>>>>>
>>>>>> Don't NFS and SMB request leases with their open calls?
>>>> Our initial thought was to overload the lk call to request a lease,
>>>> and also to support open+lease. But the problems with fd-based leases
>>>> were:
>>>> - It doesn't go well with the NFS world, where handles are created
>>>> and delegations are granted, followed by multiple
>>>> open/read/write/close calls on that fd. Hence, in NFS, a lease is
>>>> more conveniently associated with handles than with fds.
>>>> - Anonymous fds are used by NFSv3 and pNFS to maintain statelessness.
>>>> An anonymous fd means there is no open fd on the file; the backend
>>>> opens the file, reads/writes, and closes it. These anonymous fds make
>>>> it harder to get the lease conflict check right, and we may break
>>>> leases when it is not necessary.
>>>> - The lease might have to live longer than the fd (handle-based
>>>> leases, i.e. when persistent/durable handles are used instead of
>>>> fds).
>>>> That said, we could even now overload the open call to request a
>>>> lease, but for the reasons above it would be associated with the
>>>> inode and the client, not the fd (see the sketch below).
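>>>>
>>>> A minimal sketch of that inode+client association (hypothetical
>>>> types; the real check would also weigh lease type against the
>>>> operation): the conflict test compares clients, not fds, so a
>>>> holder's own anonymous fds or re-opens never break its lease.
>>>>
>>>>     #include <stdbool.h>
>>>>     #include <string.h>
>>>>
>>>>     struct lease {
>>>>             char client_id[64]; /* owner: the client, not an fd */
>>>>             int  type;          /* e.g. read vs read-write      */
>>>>     };
>>>>
>>>>     static bool
>>>>     lease_conflicts (const struct lease *l, const char *req_client)
>>>>     {
>>>>             /* Same client: its own opens/reads/writes on any fd
>>>>              * (anonymous or not) do not break its own lease.   */
>>>>             return strcmp (l->client_id, req_client) != 0;
>>>>     }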
>>
>> Ok, my question was to understand why a FOP; I think I got that :)
>>
>> I guess later, as we think about compound FOPs (I am beginning to like
>> the SMB2 model of the same), we can compound this action anyway.
>>
> The reason we have two fops is application compatibility (at least with
> NFS-Ganesha), which does an open followed by a lease_lock request after
> determining, based on a few heuristics, that a lease can be granted to
> the client (see the sketch below).
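>
> A sketch of that two-fop sequence in gfapi terms (glfs_open() and
> glfs_close() are real gfapi calls; lease_lock_sketch() is a purely
> hypothetical stand-in for the proposed lease fop):
>
>     #include <fcntl.h>
>     #include <glusterfs/api/glfs.h>
>
>     /* Hypothetical stand-in for the proposed lease fop. */
>     static int
>     lease_lock_sketch (glfs_fd_t *fd)
>     {
>             (void) fd;          /* would send the lease_lock fop */
>             return 0;
>     }
>
>     static void
>     open_then_lease (glfs_t *fs)
>     {
>             glfs_fd_t *fd = glfs_open (fs, "/file", O_RDWR);
>             if (!fd)
>                     return;
>             /* Fop 1 (open) done. The application applies its own
>              * heuristics, then issues the separate lease fop.   */
>             if (lease_lock_sketch (fd) == 0)    /* fop 2 */
>                     ; /* lease granted; safe to cache/delegate  */
>             glfs_close (fd);
>     }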
>
> Thanks,
> Soumya
>
>>>>
>>>>>>
>>>>>> 3) If there were discussions around some designs that were rejected,
>>>>>> could you also update the document with the same, so that one can
>>>>>> understand the motivation behind the current manner of implementing
>>>>>> leases?
>>>> Yes, sure; we are updating leases.md and will send it out at the
>>>> earliest.
>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Shyam
>>>>>>
>>>>>> On 04/16/2015 07:37 AM, Soumya Koduri wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> The link below contains the lease-lock design notes (as per the
>>>>>>> latest discussion we had). Thanks to everyone involved (CC'ed).
>>>>>>>
>>>>>>> http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#delegations.2Flease-locks
>>>>>>>
>>>>>>> Kindly go through the same and provide us your inputs.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Soumya
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel