[Gluster-devel] [features/locks] Fetching lock info in lookup
Raghavendra Gowdappa
rgowdapp at redhat.com
Thu Jun 21 01:25:41 UTC 2018
On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez <xhernandez at redhat.com>
wrote:
> On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
>> Krutika,
>>
>> This patch doesn't seem to get counts per domain, like the number of
>> inodelks or entrylks acquired in a domain "xyz". Am I right? If per-domain
>> stats are not available, the interested domains would need to be passed in
>> xdata_req. Any suggestions on that?
>>
>
> We have GLUSTERFS_INODELK_DOM_COUNT. Its data should be the domain name for
> which we want to know the number of inodelks (the count is returned in
> GLUSTERFS_INODELK_COUNT, though).
>
> It only exists for inodelk. If you need it for entrylk, it would need to
> be implemented.
>
Yes, I realised that after going through the patch a bit more deeply. Thanks.
I'll implement a domain-based entrylk count.
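
For the archives, here is a rough, untested sketch of how a caller could use
the existing key Xavi mentioned: pass the domain of interest under
GLUSTERFS_INODELK_DOM_COUNT in xdata_req and read the count back from
GLUSTERFS_INODELK_COUNT in the callback. The domain string, function names
and error handling below are only placeholders, and I am assuming the count
comes back as an int32 (that is how I read the patch):

/* Rough sketch only -- not compiled. Assumes the usual xlator headers
 * (xlator.h, dict.h, logging.h); "dht.file.migrate" is just a
 * placeholder domain name. */

int32_t
inodelk_count_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                          int32_t op_ret, int32_t op_errno, inode_t *inode,
                          struct iatt *buf, dict_t *xdata,
                          struct iatt *postparent)
{
        int32_t count = 0;

        if ((op_ret == 0) && xdata &&
            (dict_get_int32 (xdata, GLUSTERFS_INODELK_COUNT, &count) == 0))
                gf_msg_debug (this->name, 0,
                              "granted inodelks in requested domain: %d",
                              count);

        /* unwind or destroy the frame as appropriate for the caller */
        return 0;
}

int
inodelk_count_lookup (call_frame_t *frame, xlator_t *this, loc_t *loc)
{
        dict_t *xdata_req = dict_new ();
        char   *domain    = "dht.file.migrate";   /* placeholder domain */
        int     ret       = -1;

        if (!xdata_req)
                return -1;

        /* value of GLUSTERFS_INODELK_DOM_COUNT is the domain whose
         * inodelk count we are interested in */
        ret = dict_set_str (xdata_req, GLUSTERFS_INODELK_DOM_COUNT, domain);
        if (ret == 0)
                STACK_WIND (frame, inodelk_count_lookup_cbk,
                            FIRST_CHILD (this),
                            FIRST_CHILD (this)->fops->lookup, loc, xdata_req);

        dict_unref (xdata_req);
        return ret;
}
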
> Xavi
>
>
>> regards,
>> Raghavendra
>>
>> On Wed, Jun 20, 2018 at 12:58 PM, Raghavendra Gowdappa <
>> rgowdapp at redhat.com> wrote:
>>
>>>
>>>
>>> On Wed, Jun 20, 2018 at 12:06 PM, Krutika Dhananjay <kdhananj at redhat.com
>>> > wrote:
>>>
>>>> We do already have a way to get inodelk and entrylk count from a bunch
>>>> of fops, introduced in http://review.gluster.org/10880.
>>>> Can you check if you can make use of this feature?
>>>>
>>>
>>> Thanks, Krutika. Yes, this feature meets DHT's requirement. We might need
>>> a GLUSTERFS_PARENT_INODELK, but that can be easily added along the lines of
>>> the other counts. If necessary I'll send a patch to implement
>>> GLUSTERFS_PARENT_INODELK.
>>>
>>>
>>>> -Krutika
>>>>
>>>>
>>>> On Wed, Jun 20, 2018 at 9:17 AM, Amar Tumballi <atumball at redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra Gowdappa <
>>>>> rgowdapp at redhat.com> wrote:
>>>>>
>>>>>> All,
>>>>>>
>>>>>> We have a requirement in DHT [1] to query the number of locks granted
>>>>>> on an inode in the lookup fop. I am planning to use xdata_req in lookup
>>>>>> to pass down the relevant arguments for this query. I am proposing the
>>>>>> following signature:
>>>>>>
>>>>>> In the lookup request path, the following key-value pairs will be
>>>>>> passed in xdata_req:
>>>>>> * "glusterfs.lock.type"
>>>>>> - values can be "glusterfs.posix", "glusterfs.inodelk",
>>>>>> "glusterfs.entrylk"
>>>>>> * If the value of "glusterfs.lock.type" is "glusterfs.entrylk", then
>>>>>> basename is passed as a value in xdata_req for key
>>>>>> "glusterfs.entrylk.basename"
>>>>>> * key "glusterfs.lock-on?" will differentiate whether the lock
>>>>>> information is on current inode ("glusterfs.current-inode") or parent-inode
>>>>>> ("glusterfs.parent-inode"). For a nameless lookup "glusterfs.parent-inode"
>>>>>> is invalid.
>>>>>> * "glusterfs.blocked-locks" - Information should be limited to
>>>>>> blocked locks.
>>>>>> * "glusterfs.granted-locks" - Information should be limited to
>>>>>> granted locks.
>>>>>> * If necessary, other information about granted and blocked locks can
>>>>>> be added. Since there is no requirement for now, I am not adding those
>>>>>> keys.
>>>>>>
>>>>>> The response dictionary will have information in the following format:
>>>>>> * "glusterfs.entrylk.<gfid>.<basename>.granted-locks" - number of
>>>>>> granted entrylks on inode "gfid" with "basename" (usually this value will
>>>>>> be either 0 or 1 unless we introduce read/write lock semantics).
>>>>>> * "glusterfs.inodelk.<gfid>.granted-locks" - number of granted
>>>>>> inodelks on "basename"
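>>>>>>
>>>>>> To make the intent concrete, below is a rough, illustrative sketch of
>>>>>> how DHT could fill xdata_req with the proposed keys and read the
>>>>>> response. The key names are the ones proposed above; the value types,
>>>>>> the boolean flag for "glusterfs.granted-locks" and the variable names
>>>>>> are only assumptions:
>>>>>>
>>>>>> /* request: granted inodelks on the parent inode */
>>>>>> ret = dict_set_str (xattr_req, "glusterfs.lock.type",
>>>>>>                     "glusterfs.inodelk");
>>>>>> if (ret == 0)
>>>>>>         ret = dict_set_str (xattr_req, "glusterfs.lock-on?",
>>>>>>                             "glusterfs.parent-inode");
>>>>>> if (ret == 0)
>>>>>>         ret = dict_set_int32 (xattr_req, "glusterfs.granted-locks", 1);
>>>>>>
>>>>>> /* response (in the lookup callback): features/locks would answer with
>>>>>>  * a per-gfid key; assuming the count comes back as an int32 */
>>>>>> char     key[256] = {0,};
>>>>>> int32_t  granted  = 0;
>>>>>>
>>>>>> snprintf (key, sizeof (key), "glusterfs.inodelk.%s.granted-locks",
>>>>>>           uuid_utoa (loc->parent->gfid));
>>>>>> if (xdata_rsp && (dict_get_int32 (xdata_rsp, key, &granted) == 0))
>>>>>>         gf_msg_debug (this->name, 0,
>>>>>>                       "granted inodelks on parent: %d", granted);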
>>>>>>
>>>>>> Thoughts?
>>>>>>
>>>>>>
>>>>> I personally feel it is good to get as much information as possible in
>>>>> lookup, so that all translators can take better high-level decisions.
>>>>> So the broad answer would be: go for it. The main reason xdata is
>>>>> provided in all fops is to allow this kind of extra information
>>>>> fetching/overloading anyway.
>>>>>
>>>>> Since you have clearly documented the need, it will be easier to review
>>>>> and to document it with the commit. So, all for it.
>>>>>
>>>>> Regards,
>>>>> Amar
>>>>>
>>>>>
>>>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1581306#c28
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>