[Gluster-devel] [features/locks] Fetching lock info in lookup
Raghavendra Gowdappa
rgowdapp at redhat.com
Thu Jun 21 01:44:53 UTC 2018
On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa <rgowdapp at redhat.com>
wrote:
>
>
> On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez <xhernandez at redhat.com>
> wrote:
>
>> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa <rgowdapp at redhat.com>
>> wrote:
>>
>>> Krutika,
>>>
>>> This patch doesn't seem to be getting counts per domain, like number of
>>> inodelks or entrylks acquired in a domain "xyz". Am I right? If per domain
>>> stats are not available, passing interested domains in xdata_req would be
>>> needed. Any suggestions on that?
>>>
>>
>> We have GLUSTERFS_INODELK_DOM_COUNT. Its data should be the domain name for
>> which we want to know the number of inodelks (the count is returned in
>> GLUSTERFS_INODELK_COUNT, though).
>>
>> It only exists for inodelk. If you need it for entrylk, it would need to
>> be implemented.
>>
>
> Yes, I realised that after going through the patch a bit more deeply.
> Thanks. I'll implement a domain-based entrylk count.
>
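For context, here is a minimal sketch of how the existing single-domain
interface Xavi described is consumed today (assuming the usual dict helpers
and the key macros mentioned in this thread, and assuming the count is set as
an int32; the function and domain names are only illustrative):

    #include "glusterfs.h"
    #include "dict.h"

    /* Request side: ask the locks xlator to count inodelks in one domain. */
    static int
    request_inodelk_count (dict_t *xdata_req, const char *domain)
    {
            /* the value of the key is the domain we want counted */
            return dict_set_str (xdata_req, GLUSTERFS_INODELK_DOM_COUNT,
                                 (char *)domain);
    }

    /* Response side (in the lookup cbk): the count comes back in
     * GLUSTERFS_INODELK_COUNT, irrespective of the domain asked for. */
    static int32_t
    read_inodelk_count (dict_t *xdata_rsp)
    {
            int32_t count = 0;

            if (xdata_rsp)
                    dict_get_int32 (xdata_rsp, GLUSTERFS_INODELK_COUNT,
                                    &count);

            return count;
    }
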
I think I need a dynamic key for responses. Otherwise it's difficult to
support requests for multiple domains in the same call; embedding the domain
name in the key lets us keep per-domain results separate. We also need a way
to send multiple domains in a request. If EC/AFR is already using the
existing key, there is a high chance of overwriting a previously set request
for a different domain. Currently this is not consumed in the lookup path by
EC/AFR/Shard (DHT is interested in this information in the lookup path) and
hence it is not a pressing problem, but we cannot rely on that.
What do you think is the better interface among the following alternatives?
In the request path:
1. Separate keys with the domain name embedded - e.g.,
glusterfs.inodelk.xyz.count. The value is ignored.
2. A single key like GLUSTERFS_INODELK_DOM_COUNT. The value is a string of
interested domains separated by a delimiter (which character should be used
as the delimiter?)
In the response path:
1. Separate keys with the domain name embedded - e.g.,
glusterfs.inodelk.xyz.count. The value is the total number of locks (granted
+ blocked).
2. A single key like GLUSTERFS_INODELK_DOM_COUNT. The value is a string of
interested domains and their lock counts separated by a delimiter (which
character should be used as the delimiter?)
I personally prefer the approach of embedding the domain name in the key, as
it avoids string parsing by consumers. Are there any other approaches you can
think of?
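To make the comparison concrete, a rough sketch of what alternative 1 could
look like for a consumer (nothing here is implemented; the key format is the
one proposed above and the helper names are made up):

    #include "dict.h"
    #include "mem-pool.h"   /* gf_asprintf, GF_FREE */

    /* Request: one key per interested domain, domain embedded in the key. */
    static int
    request_domain_inodelk_count (dict_t *xdata_req, const char *domain)
    {
            char *key = NULL;
            int   ret = -1;

            /* e.g. "glusterfs.inodelk.xyz.count"; the value is ignored */
            if (gf_asprintf (&key, "glusterfs.inodelk.%s.count", domain) < 0)
                    return -1;

            ret = dict_set_int32 (xdata_req, key, 0);
            GF_FREE (key);
            return ret;
    }

    /* Response: locks would set the granted + blocked count against the
     * same domain-embedded key. */
    static int32_t
    read_domain_inodelk_count (dict_t *xdata_rsp, const char *domain)
    {
            char    *key   = NULL;
            int32_t  count = 0;

            if (gf_asprintf (&key, "glusterfs.inodelk.%s.count", domain) < 0)
                    return -1;

            dict_get_int32 (xdata_rsp, key, &count);
            GF_FREE (key);
            return count;
    }

Since the domain is part of the key, requests for several domains can coexist
in the same xdata_req without stepping on each other.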
As of now, the response returned is the number of (granted + blocked) locks.
For consumers using write locks the granted count is always 1, and hence the
number of blocked locks can be inferred. But for read-lock consumers this is
not possible, as there can be more than one granted read lock. For the
requirement in DHT we don't need the exact number; we only need to know
whether there are any granted locks, which the existing implementation can
tell us. So I am not changing that.
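In other words, for DHT's purposes a simple presence check on the returned
count would do; using the hypothetical read_domain_inodelk_count() helper
from the sketch above (the domain name is made up):

    int32_t      count = read_domain_inodelk_count (xdata_rsp,
                                                    "some-dht-domain");
    gf_boolean_t held  = (count > 0) ? _gf_true : _gf_false;
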
>
>> Xavi
>>
>>
>>> regards,
>>> Raghavendra
>>>
>>> On Wed, Jun 20, 2018 at 12:58 PM, Raghavendra Gowdappa <
>>> rgowdapp at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Jun 20, 2018 at 12:06 PM, Krutika Dhananjay <
>>>> kdhananj at redhat.com> wrote:
>>>>
>>>>> We already have a way to get inodelk and entrylk counts from a bunch
>>>>> of fops, introduced in http://review.gluster.org/10880.
>>>>> Can you check if you can make use of this feature?
>>>>>
>>>>
>>>> Thanks Krutika. Yes, this feature meets DHT's requirement. We might
>>>> need a GLUSTERFS_PARENT_INODELK, but that can be easily added along the
>>>> lines of other counts. If necessary I'll send a patch to implement
>>>> GLUSTERFS_PARENT_INODELK.
>>>>
>>>>
>>>>> -Krutika
>>>>>
>>>>>
>>>>> On Wed, Jun 20, 2018 at 9:17 AM, Amar Tumballi <atumball at redhat.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra Gowdappa <
>>>>>> rgowdapp at redhat.com> wrote:
>>>>>>
>>>>>>> All,
>>>>>>>
>>>>>>> We've a requirement in DHT [1] to query the number of locks granted
>>>>>>> on an inode in the lookup fop. I am planning to use xdata_req in
>>>>>>> lookup to pass down the relevant arguments for this query. I am
>>>>>>> proposing the following signature:
>>>>>>>
>>>>>>> In the lookup request path, the following key-value pairs will be
>>>>>>> passed in xdata_req:
>>>>>>> * "glusterfs.lock.type"
>>>>>>> - values can be "glusterfs.posix", "glusterfs.inodelk",
>>>>>>> "glusterfs.entrylk"
>>>>>>> * If the value of "glusterfs.lock.type" is "glusterfs.entrylk", then
>>>>>>> basename is passed as a value in xdata_req for key
>>>>>>> "glusterfs.entrylk.basename"
>>>>>>> * The key "glusterfs.lock-on?" will differentiate whether the lock
>>>>>>> information is on the current inode ("glusterfs.current-inode") or the
>>>>>>> parent inode ("glusterfs.parent-inode"). For a nameless lookup,
>>>>>>> "glusterfs.parent-inode" is invalid.
>>>>>>> * "glusterfs.blocked-locks" - Information should be limited to
>>>>>>> blocked locks.
>>>>>>> * "glusterfs.granted-locks" - Information should be limited to
>>>>>>> granted locks.
>>>>>>> * If necessary, other information about granted and blocked locks can
>>>>>>> be added. Since there is no requirement for now, I am not adding those
>>>>>>> keys.
>>>>>>>
>>>>>>> The response dictionary will have information in the following format:
>>>>>>> * "glusterfs.entrylk.<gfid>.<basename>.granted-locks" - number of
>>>>>>> granted entrylks on inode "gfid" with "basename" (usually this value will
>>>>>>> be either 0 or 1 unless we introduce read/write lock semantics).
>>>>>>> * "glusterfs.inodelk.<gfid>.granted-locks" - number of granted
>>>>>>> inodelks on inode "gfid"
>>>>>>>
>>>>>>> Thoughts?
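A rough sketch of what the request side of this proposal could look like from
a consumer such as DHT (the key strings are the ones proposed above; the
helper name and the choice of value types are only illustrative):

    /* Ask for granted entrylks on <basename> under the parent inode. */
    static int
    fill_lock_query (dict_t *xdata_req, const char *basename)
    {
            int ret = 0;

            ret |= dict_set_str (xdata_req, "glusterfs.lock.type",
                                 "glusterfs.entrylk");
            ret |= dict_set_str (xdata_req, "glusterfs.entrylk.basename",
                                 (char *)basename);
            ret |= dict_set_str (xdata_req, "glusterfs.lock-on?",
                                 "glusterfs.parent-inode");
            /* presence of the key is what matters; value used as a flag */
            ret |= dict_set_int32 (xdata_req, "glusterfs.granted-locks", 1);

            return ret ? -1 : 0;
    }

The response would then be read back under a key built from the gfid and
basename, e.g. "glusterfs.entrylk.<gfid>.<basename>.granted-locks".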
>>>>>>>
>>>>>>>
>>>>>> I personally feel it is good to get as much information as possible in
>>>>>> lookup, as it helps all translators take some high-level decisions
>>>>>> better. So, the very broad answer would be to say go for it. The main
>>>>>> reason xdata is provided in all fops is to do this kind of extra
>>>>>> information fetching/overloading anyway.
>>>>>>
>>>>>> As you have clearly documented the need, that makes it even easier to
>>>>>> review and document with the commit. So, all for it.
>>>>>>
>>>>>> Regards,
>>>>>> Amar
>>>>>>
>>>>>>
>>>>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1581306#c28
>>>>>>>
>>>>>>>
>>>>>> _______________________________________________
>>>>>> Gluster-devel mailing list
>>>>>> Gluster-devel at gluster.org
>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>