[Gluster-devel] Should we enable contention notification by default ?

Xavi Hernandez xhernandez at redhat.com
Thu May 2 13:44:40 UTC 2019


On Thu, 2 May 2019, 15:37 Milind Changire, <mchangir at redhat.com> wrote:

> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez <xhernandez at redhat.com>
> wrote:
>
>> Hi Ashish,
>>
>> On Thu, May 2, 2019 at 2:17 PM Ashish Pandey <aspandey at redhat.com> wrote:
>>
>>> Xavi,
>>>
>>> I would like to keep this option (features.lock-notify-contention)
>>> enabled by default.
>>> However, I can see that there is one more option that impacts its
>>> behavior: "notify-contention-delay"
>>>
>>
> Just a nit: I wish the option were called "notify-contention-interval".
> The "delay" part doesn't really convey where the delay is applied.
>

That makes sense. Maybe we can rename it, or add a second name (an alias)
so that existing configurations keep working. If there are no objections,
I will send a patch with the change.
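
For context, a volume option can declare more than one key, so the old
name could be kept as an alias of the new one. A sketch of what the patch
might look like (remaining fields elided; this is not the actual change):

    /* The first key becomes the canonical name; the second keeps the
     * old name working for existing configurations. */
    { .key = {"notify-contention-interval", "notify-contention-delay"},
      .type = GF_OPTION_TYPE_INT,
      /* ... description, default value, etc. unchanged ... */
    },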

Xavi


>
>>> .description = "This value determines the minimum amount of time "
>>>                "(in seconds) between upcall contention notifications "
>>>                "on the same inode. If multiple lock requests are "
>>>                "received during this period, only one upcall will "
>>>                "be sent."},
>>>
>>> I am not sure what the best value of this option should be if we want
>>> to keep features.lock-notify-contention ON by default.
>>> It looks like if we set notify-contention-delay to a higher value, say
>>> 5 seconds, it will wait that long before sending the upcall
>>> notification, which does not look good.
>>>
>>
>> No, the first notification is sent immediately. What this option does is
>> define the minimum interval between consecutive notifications for the
>> same lock. This is done to avoid storms of notifications when many
>> requests arrive referencing the same lock.
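>>
>> To make that concrete, here is a minimal sketch of the throttling logic
>> (illustrative only; the names are hypothetical, not the actual locks
>> xlator code):
>>
>>     #include <stdbool.h>
>>     #include <time.h>
>>
>>     /* Hypothetical per-lock state; the real xlator keeps much more. */
>>     struct lock_state {
>>         time_t last_upcall; /* when the last contention upcall was sent */
>>     };
>>
>>     /* The first contending request always triggers an upcall
>>      * (last_upcall == 0); later ones are suppressed until 'interval'
>>      * seconds have passed since the previous upcall for this lock. */
>>     static bool should_notify(struct lock_state *lk, time_t now,
>>                               time_t interval)
>>     {
>>         if (lk->last_upcall == 0 || now - lk->last_upcall >= interval) {
>>             lk->last_upcall = now;
>>             return true;
>>         }
>>         return false;
>>     }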
>>
>>> Is my understanding correct?
>>> What will be the impact of this value, and what should its default
>>> value be?
>>>
>>
>> I think the current default value of 5 seconds seems good enough. If
>> there are many bricks, each brick could send a notification per lock.
>> 1000 bricks would mean a client could receive 1000 notifications every 5
>> seconds (i.e. 200 per second). That doesn't seem too much, but in those
>> cases, and considering we could have other locks as well, maybe a higher
>> value (say 10 seconds) could be better.
>>
>> Xavi
>>
>>
>>>
>>> ---
>>> Ashish
>>>
>>> ------------------------------
>>> *From: *"Xavi Hernandez" <xhernandez at redhat.com>
>>> *To: *"gluster-devel" <gluster-devel at gluster.org>
>>> *Cc: *"Pranith Kumar Karampuri" <pkarampu at redhat.com>, "Ashish Pandey" <
>>> aspandey at redhat.com>, "Amar Tumballi" <atumball at redhat.com>
>>> *Sent: *Thursday, May 2, 2019 4:15:38 PM
>>> *Subject: *Should we enable contention notification by default ?
>>>
>>> Hi all,
>>>
>>> there's a feature in the locks xlator that sends a notification to the
>>> current owner of a lock when another client tries to acquire the same
>>> lock. This way the current owner becomes aware of the contention and
>>> can release the lock as soon as possible, allowing the other client to
>>> proceed.
>>>
>>> This is especially useful when eager-locking is used and multiple
>>> clients access the same files and directories. Currently both
>>> replicated and dispersed volumes use eager-locking, and both can use
>>> contention notification to force an early release of the lock.
>>>
>>> Eager-locking reduces the number of network requests required for each
>>> operation, improving performance, but it can delay other clients while
>>> it keeps the inode or entry locked. With the contention notification
>>> feature we avoid this delay, so we get the best performance with
>>> minimal issues in multi-client environments.
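>>>
>>> To illustrate the idea (all names here are hypothetical, not the
>>> actual AFR/EC code), the client-side reaction to a contention upcall
>>> is roughly the following:
>>>
>>>     #include <stdbool.h>
>>>
>>>     struct eager_lock {
>>>         bool contended;     /* another client is waiting for this lock */
>>>         int  ops_in_flight; /* operations currently using this lock    */
>>>     };
>>>
>>>     void release_lock(struct eager_lock *el); /* hypothetical helper */
>>>
>>>     /* Called when a brick notifies us that another client is waiting
>>>      * on a lock we are holding through eager-locking. */
>>>     static void on_contention_upcall(struct eager_lock *el)
>>>     {
>>>         el->contended = true;
>>>         if (el->ops_in_flight == 0)
>>>             release_lock(el); /* nothing pending: give it up now */
>>>         /* otherwise the lock is released as soon as the pending
>>>          * operations complete, instead of being kept for reuse */
>>>     }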
>>>
>>> Currently the contention notification feature is controlled by the
>>> 'features.lock-notify-contention' option, and it's disabled by default.
>>> Should we enable it by default?
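>>>
>>> For reference, enabling it on an existing volume is a one-line change
>>> (the volume name "myvol" below is just an example):
>>>
>>>     # gluster volume set myvol features.lock-notify-contention on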
>>>
>>> I don't see any reason to keep it disabled by default. Does anyone
>>> foresee any problem?
>>>
>>> Regards,
>>>
>>> Xavi
>>>
>
>
>
> --
> Milind
>
>