[Gluster-devel] Should we enable features.locks-notify.contention by default ?

Ashish Pandey aspandey at redhat.com
Thu May 30 09:23:36 UTC 2019



----- Original Message -----

From: "Xavi Hernandez" <xhernandez at redhat.com> 
To: "Ashish Pandey" <aspandey at redhat.com> 
Cc: "Amar Tumballi Suryanarayan" <atumball at redhat.com>, "gluster-devel" <gluster-devel at gluster.org> 
Sent: Thursday, May 30, 2019 2:03:54 PM 
Subject: Re: [Gluster-devel] Should we enable features.locks-notify.contention by default ? 

On Thu, May 30, 2019 at 9:03 AM Ashish Pandey <aspandey at redhat.com> wrote:
> I am only concerned about in-service upgrades. 
> If a feature/option is not present in V1, then I would prefer not to enable it by default in V2. 




The problem is that without enabling it, (other-)eager-lock will cause performance issues in some cases. It doesn't seem good to keep an option disabled if enabling it solves these problems. 
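For anyone who wants this behaviour before any default change, the option can already be enabled per volume. A minimal sketch, assuming a volume named myvol and the option name as spelled in the subject (the exact spelling may differ between releases; check "gluster volume set help" on your version): 

    # enable contention notifications on one volume (hypothetical volume name)
    gluster volume set myvol features.locks-notify.contention on

    # confirm the value that is in effect
    gluster volume get myvol features.locks-notify.contention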


> We have seen some problems with other-eager-lock when we changed it to be enabled by default. 


Which problems? I think the only issue with other-eager-lock has been precisely that locks-notify-contention was disabled, plus a bug that needed to be fixed anyway. 
I was talking about the issue we saw when other-eager-lock was disabled and we then did an in-service upgrade to a version where that option is ON by default. 
Although we don't have the root cause of that one, I was wondering if a similar issue could happen in this case as well. 
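For context, the cluster-wide op-version in effect during such an in-service upgrade can be checked like this (assuming a glusterd recent enough to support it): 

    gluster volume get all cluster.op-version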

The difference will be that upgraded bricks will start sending upcall notifications. If clients are too old, these will simply be ignored. So I don't see any problem right now. 
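To see which clients are connected to a volume (and therefore whether any of them might be too old to understand the notifications), something like this should work (hypothetical volume name; the exact output varies between releases): 

    gluster volume status myvol clients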

Am I missing something? 


--- 
Ashish 


From: "Amar Tumballi Suryanarayan" < atumball at redhat.com > 
To: "Xavi Hernandez" < xhernandez at redhat.com > 
Cc: "gluster-devel" < gluster-devel at gluster.org > 
Sent: Thursday, May 30, 2019 12:04:43 PM 
Subject: Re: [Gluster-devel] Should we enable features.locks-notify.contention by default ? 



On Thu, May 30, 2019 at 11:34 AM Xavi Hernandez <xhernandez at redhat.com> wrote: 


Hi all, 

a patch [1] was added some time ago to send upcall notifications from the locks xlator to the current owner of a granted lock when another client tries to acquire the same lock (inodelk or entrylk). This makes it possible to use eager-locking on the client side, which improves performance significantly, while still keeping good performance when multiple clients access the same files: the current owner of the lock receives the notification and releases it as soon as possible, allowing the other client to acquire it and proceed very soon. 
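As an illustration of the effect, consider a minimal two-client experiment (hypothetical volume and mount paths, not part of the patch itself): without contention notifications the second writer has to wait until the first client gives up its eager lock on its own, while with them enabled the first client releases the lock as soon as it learns someone else wants it: 

    # client 1: sustained writes, so it holds the eager lock on the file
    dd if=/dev/zero of=/mnt/client1/file bs=1M count=1024 &

    # client 2 (a second mount of the same volume): contends for the same inode;
    # compare the elapsed time with the option off and on
    time dd if=/dev/zero of=/mnt/client2/file bs=1M count=16 conv=notrunc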

Currently both AFR and EC are ready to handle these contention notifications, and both use eager-locking. However, the upcall contention notification is disabled by default. 
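For reference, the client-side eager-locking options involved can be inspected per volume. Assuming a volume named myvol and the option names used by recent releases: 

    gluster volume get myvol cluster.eager-lock           # AFR
    gluster volume get myvol disperse.eager-lock          # EC, regular file fops
    gluster volume get myvol disperse.other-eager-lock    # EC, other fops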

I think we should enable it by default. Does anyone see any possible issue if we do that? 




If it helps performance, we should ideally do it. 

But, considering we are only days away from glusterfs-7.0 branching, should we do it now, or wait until after the branching and make it the default for the next version, so that it gets time for testing? Considering it is about consistency, I would like to hear everyone's opinion here. 

Regards, 
Amar 





Regards, 

Xavi 

[1] https://review.gluster.org/c/glusterfs/+/14736 




-- 
Amar Tumballi (amarts) 







