[Gluster-devel] Default quorum for 2 way replication

Shyam srangana at redhat.com
Fri Mar 4 15:06:11 UTC 2016


On 03/04/2016 07:30 AM, Pranith Kumar Karampuri wrote:
>
>
> On 03/04/2016 05:47 PM, Bipin Kunal wrote:
>> Hi Pranith,
>>
>> Thanks for starting this mail thread.
>>
>> Looking at it from a user's perspective, the most important thing is to
>> get a "good copy" of the data. I agree that people use replication for
>> HA, but having stale data with HA has no value.
>> So I would suggest making 'auto' quorum the default configuration even
>> for 2-way replication.
>>
>> If a user is willing to risk data for the sake of HA, he always has the
>> option to disable it. But the default preference should be data and its
>> integrity.

I think we need to consider *maintenance* activities on the volume, like
replacing a brick in a replica pair, or upgrading one half of the replica
and then the other. If we choose 'auto' in a 2-way replicated state, the
replica group would function read-only during such activities, is this
correct?
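
For the brick-replacement case I mean something along these lines (this
is from memory, so treat the exact syntax, hostnames and brick paths as
placeholders):

    # swap a failed/old brick for a new one; self-heal then copies the
    # data from the surviving replica onto the new brick
    gluster volume replace-brick <VOLNAME> server1:/bricks/old \
        server1:/bricks/new commit force

    # watch the heal catch up
    gluster volume heal <VOLNAME> info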

Having said the above, we already have the option in place, right? I.e.,
admins can already choose 'auto'; it is just the default that we are
discussing. This could also be tackled via documentation/best practices
("yeah right! who reads those again?" is a valid comment here).

I guess we need to be clear (in documentation or otherwise) about what
users get when they choose one over the other (like the HA point below
and also the upgrade concerns above, etc.), irrespective of how this
discussion ends (just my 2 cents).

>
> That is the point. There is an illusion of choice between data integrity
> and HA. But we are not *really* giving HA, are we? HA will be there only
> if the second brick in the replica pair goes down. In your typical

@Pranith, can you elaborate on this? I am not so AFR savvy, so I am unable
to comprehend why HA is available only when the second brick goes down and
not when the first does. It would just help in understanding the issue at
hand.

> deployment, we can't really give any guarantees about which brick will go
> down when. So I am not sure we can consider it HA. But I would love to
> hear what others have to say about this as well. If a majority of users
> say they need it to be 'auto', you will definitely see a patch :-).
>
> Pranith
>>
>> Thanks,
>> Bipin Kunal
>>
>> On Fri, Mar 4, 2016 at 5:43 PM, Ravishankar N <ravishankar at redhat.com>
>> wrote:
>>> On 03/04/2016 05:26 PM, Pranith Kumar Karampuri wrote:
>>>> hi,
>>>>       So far default quorum for 2-way replication is 'none' (i.e.
>>>> files/directories may go into split-brain) and for 3-way replication
>>>> and
>>>> arbiter based replication it is 'auto' (files/directories won't go into
>>>> split-brain). There are requests to make default as 'auto' for 2-way
>>>> replication as well. The line of reasoning is that people value data
>>>> integrity (files not going into split-brain) more than HA (operation of
>>>> mount even when bricks go down). And admins should explicitly change
>>>> it to
>>>> 'none' when they are fine with split-brains in 2-way replication. We
>>>> were
>>>> wondering if you have any inputs about what is a sane default for 2-way
>>>> replication.
>>>>
>>>> I like the default to be 'none'. Reason: If we have 'auto' as quorum
>>>> for
>>>> 2-way replication and first brick dies, there is no HA.
>>>
>>>
>>> +1.  Quorum does not make sense when there are only 2 parties; there
>>> is no majority voting. Arbiter volumes are a better option.
>>> If someone wants some background, please see the 'Client quorum' and
>>> 'Replica 2 and Replica 3 volumes' sections of
>>> http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>>>
>>>
>>> -Ravi
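
For anyone who wants to try what Ravi is suggesting, an arbiter volume is
a replica-3 volume whose third brick stores only file metadata. If I have
the syntax right it is created roughly like this (hostnames and brick
paths are placeholders):

    gluster volume create <VOLNAME> replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arbiter
    gluster volume start <VOLNAME>

As noted above, client quorum for arbiter volumes already defaults to
'auto', so no extra tuning is needed to get the split-brain protection.
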
>>>
>>>> If users are fine with that, it is better to use a plain distribute
>>>> volume rather than replication with quorum set to 'auto'. What are
>>>> your thoughts on the matter? Please guide us in the right direction.
>>>>
>>>> Pranith
>>>
>>>
>

