[Gluster-users] How to set up a 4 way gluster file system

Sunil Kumar Heggodu Gopala Acharya sheggodu at redhat.com
Fri Apr 27 08:06:49 UTC 2018


Hi,

>>gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0

This command will create a distributed-replicate volume (yes, you have to
answer 'y' at the warning prompt to create it). It will have two
distribution legs, each containing a replica pair made of two bricks. When
a user places a file on the volume, it lands in one of the distribution
legs, which, as mentioned, holds only two copies. With replica 2 volumes
(volume type: replicate / distributed-replicate) you might hit a
split-brain situation, so we recommend a replica 3 or arbiter volume to
provide both consistency and availability.
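For reference, the recommended alternatives can be created along these
lines (a sketch only; the hostnames and brick paths are taken from your
earlier command, and each brick should live on a separate node):

```shell
# Replica 3: three full copies of each file. Client-side quorum can
# then distinguish "I am partitioned" from "the other node is down",
# which is what prevents split-brain.
gluster volume create gv0 replica 3 \
    glusterp1:/bricks/brick1/gv0 \
    glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0

# Arbiter: two full data copies plus a metadata-only third brick, so
# you get replica-3 quorum semantics at roughly replica-2 storage cost.
# The last brick listed becomes the arbiter.
gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 \
    glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0
```

With four nodes and four bricks, a distributed replica 3 layout is not
possible (the brick count must be a multiple of the replica count), which
is why the thread suggests either adding bricks or using an arbiter.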

Regards,

Sunil kumar Acharya

Senior Software Engineer

Red Hat



On Fri, Apr 27, 2018 at 12:52 PM, Thing <thing.thing at gmail.com> wrote:

> Hi,
>
> I have 4 nodes, so a quorum would be 3 of 4.  The question, I suppose, is
> why does the documentation give this command as an example without
> qualifying it?
>
> So am I running the wrong command?  I want a "raid10".
>
> On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
>
>> Hi,
>>
>> With replica 2 volumes one can easily end up in split-brains if there are
>> frequent disconnects and high IOs going on.
>> If you use replica 3 or arbiter volumes, it will guard you by using the
>> quorum mechanism giving you both consistency and availability.
>> But in replica 2 volumes, quorum does not make sense since it needs both
>> the nodes up to guarantee consistency, which costs availability.
>>
>> If you can consider having a replica 3 or arbiter volume, that would be
>> great. Otherwise you can go ahead and continue with the replica 2 volume
>> by selecting *y* at the warning message. It will create the replica 2
>> configuration as you wanted.
>>
>> HTH,
>> Karthik
>>
>> On Fri, Apr 27, 2018 at 10:56 AM, Thing <thing.thing at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have 4 servers, each with 1TB of storage set up as /dev/sdb1. I would
>>> like to set these up as a "raid 10", which should give me 2TB usable. So
>>> mirrored and concatenated?
>>>
>>> The command I am running is as per the documents, but I get a warning.
>>> How do I get this to proceed, please? The documents do not say.
>>>
>>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>>> glusterp4:/bricks/brick1/gv0
>>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>>> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
>>> Do you still want to continue?
>>>  (y/n) n
>>>
>>> Usage:
>>> volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
>>> <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
>>> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
>>>
>>> [root at glustep1 ~]#
>>>
>>> thanks
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
>

