[Gluster-devel] conflicting keys for option eager-lock

Vijay Bellur vbellur at redhat.com
Wed Apr 13 04:14:59 UTC 2016


On 04/12/2016 11:53 PM, Atin Mukherjee wrote:
>
>
> On 04/13/2016 09:15 AM, Vijay Bellur wrote:
>> On 04/08/2016 10:25 PM, Vijay Bellur wrote:
>>> Hey Pranith, Ashish -
>>>
>>> We have broken support for group virt after the following commit in
>>> release-3.7:
>>>
>>> commit 46920e3bd38d9ae7c1910d0bd83eff309ab20c66
>>> Author: Ashish Pandey <aspandey at redhat.com>
>>> Date:   Fri Mar 4 13:05:09 2016 +0530
>>>
>>>       cluster/ec: Provide an option to enable/disable eager lock
>>>
>>>
>>
>> Thinking more - do we need two different options to control eager lock
>> behavior for afr and ec? cluster.eager-lock can be applicable for ec too
>> as ec and afr are normally used in a mutually exclusive manner. Are we
>> resorting to a different key for glusterd's op-version compatibility?
> I strongly recommend using a different key, as fixing them later is a
> pain and we still don't have a solution for some of the existing
> options that are the same across multiple xlators.

 From a usability perspective, having too many similar-sounding options, 
or too many options in general, is a nightmare. Technical correctness and 
usability need to go together; otherwise the result is chaotic. Looking 
at this specific case, we now have two options, "cluster.eager-lock" 
and "disperse.eager-lock". Several questions come to mind when we 
look at the keys from a developer's perspective:

1. What does "cluster" imply? Would I need to enable that option to get 
eager locking for disperse volumes?

2. Since disperse is also a cluster translator, why do we have two 
different keys? (See the illustrative commands below.)
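
To make the inconsistency concrete, an admin today has to remember which 
key matches the volume type. For example (the volume name "testvol" is 
illustrative):

# replicate (afr) volume:
gluster volume set testvol cluster.eager-lock off

# disperse (ec) volume:
gluster volume set testvol disperse.eager-lock off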

If we look at the output of 'gluster volume set help':

<output>
Option: cluster.eager-lock
Default Value: on
Description: Lock phase of a transaction has two sub-phases. First is an 
attempt to acquire locks in parallel by broadcasting non-blocking lock 
requests. If lock acquisition fails on any server, then the held locks 
are unlocked and revert to a blocking locked mode sequentially on one 
server after another.  If this option is enabled the initial 
broadcasting lock request attempt to acquire lock on the entire file. If 
this fails, we revert back to the sequential "regional" blocking lock as 
before. In the case where such an "eager" lock is granted in the 
non-blocking phase, it gives rise to an opportunity for optimization. 
i.e, if the next write transaction on the same FD arrives before the 
unlock phase of the first transaction, it "takes over" the full file 
lock. Similarly if yet another data transaction arrives before the 
unlock phase of the "optimized" transaction, that in turn "takes over" 
the lock as well. The actual unlock now happens at the end of the last 
"optimized" transaction.

Option: disperse.eager-lock
Default Value: on
Description: This option will enable/diable eager lock fordisperse volume

</output>
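
As an aside, the mechanism that the cluster.eager-lock description tries 
to convey boils down to something like the sketch below. This is purely 
illustrative C with invented names, not actual afr code; the real logic 
lives in the afr transaction framework.

/* Illustrative sketch of the eager-lock "take-over" optimization
 * described above. A write transaction first attempts a non-blocking
 * full-file lock; if another transaction on the same fd arrives before
 * the first one unlocks, it inherits the held lock instead of issuing
 * new lock requests. All names here are invented for illustration. */

#include <stdbool.h>
#include <stdio.h>

struct fd_lock_state {
    bool eager_lock_held;  /* full-file lock currently held?    */
    int  pending_txns;     /* transactions still using the lock */
};

/* Stand-in for the non-blocking, full-file lock broadcast to all
 * servers; always succeeds in this sketch. */
static bool try_nonblocking_full_lock(struct fd_lock_state *st)
{
    st->eager_lock_held = true;
    return true;
}

static void begin_write_txn(struct fd_lock_state *st)
{
    if (st->eager_lock_held) {
        /* Eager lock still held by a previous transaction on this
         * fd: take it over instead of locking again. */
        st->pending_txns++;
        printf("txn: took over existing eager lock\n");
        return;
    }
    if (try_nonblocking_full_lock(st)) {
        st->pending_txns++;
        printf("txn: acquired eager (full-file) lock\n");
    }
    /* On failure we would fall back to sequential blocking
     * "regional" locks; not modelled here. */
}

static void end_write_txn(struct fd_lock_state *st)
{
    /* The actual unlock happens only when the last transaction
     * using the lock ends. */
    if (--st->pending_txns == 0) {
        st->eager_lock_held = false;
        printf("txn: released eager lock\n");
    }
}

int main(void)
{
    struct fd_lock_state st = { false, 0 };

    begin_write_txn(&st);  /* first write: acquires the lock   */
    begin_write_txn(&st);  /* overlapping write: takes it over */
    end_write_txn(&st);
    end_write_txn(&st);    /* actual unlock happens here       */
    return 0;
}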

cluster.eager-lock's description nowhere conveys that it applies to 
replicate/afr volumes, and disperse.eager-lock's description contains a 
few typos. With such descriptions, the confusion from a user's 
perspective would be even worse than what we perceive as developers. 
Usually it is when our behavior is not intuitive or is confusing that 
configuration errors happen.

If we really do need to maintain two different keys for correctness, the 
option keys and descriptions need to convey more appropriate and 
relevant information to users, at the very least.
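
For example, a rough cut along these lines (wording merely illustrative, 
not a proposed final text) would already be an improvement:

Option: disperse.eager-lock
Description: Enable/disable eager locking for disperse (ec) volumes. 
When enabled, a lock acquired by one write transaction is retained and 
handed over to subsequent transactions on the same file descriptor, 
avoiding repeated lock/unlock network round trips.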

Regards,
Vijay
