[Gluster-users] How to set up a 4 way gluster file system

Thing thing.thing at gmail.com
Fri Apr 27 07:22:29 UTC 2018


Hi,

I have 4 nodes, so a quorum would be 3 of 4.  The question is, I suppose, why
does the documentation give this command as an example without qualifying it?

So am I running the wrong command?  I want a "raid 10".
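
A sketch of what I think the quorum-safe "raid 10" equivalent on 4 nodes would
be: a distributed arbiter volume, 2 x (2 data + 1 arbiter). The
/bricks/arbiter1/gv0 paths here are hypothetical extra small bricks that would
need to be created first:

gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/arbiter1/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 glusterp1:/bricks/arbiter1/gv0

Arbiter bricks store only metadata, so usable capacity stays around 2TB while
each mirrored pair gets a quorum tie-breaker.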

On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:

> Hi,
>
> With replica 2 volumes one can easily end up in split-brain if there are
> frequent disconnects and heavy I/O going on.
> If you use replica 3 or arbiter volumes, the quorum mechanism will guard
> you, giving you both consistency and availability.
> But with replica 2 volumes, quorum does not make sense, since it needs both
> nodes up to guarantee consistency, which costs availability.
>
> If you can use a replica 3 or arbiter volume instead, that would be great.
> Otherwise you can go ahead and continue with the replica 2 volume
> by selecting *y* at the warning message. It will create the replica 2
> configuration you wanted.
>
> HTH,
> Karthik
>
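The arbiter setup suggested above uses the "replica 3 arbiter 1" form of
volume create. A minimal sketch, with placeholder hostnames and brick paths:

gluster volume create <volname> replica 3 arbiter 1 \
    host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/brick1

Every third brick listed becomes the arbiter of its replica set and holds only
file names and metadata, not file data.
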
> On Fri, Apr 27, 2018 at 10:56 AM, Thing <thing.thing at gmail.com> wrote:
>
>> Hi,
>>
>> I have 4 servers, each with 1TB of storage at /dev/sdb1. I would like
>> to set these up as a "raid 10", which will give me 2TB usable.  So mirrored
>> and concatenated?
>>
>> The command I am running is as per the documentation, but I get a warning.
>> How do I get it to proceed? The documents do not say.
>>
>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>> glusterp4:/bricks/brick1/gv0
>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
>> Do you still want to continue? (y/n) n
>>
>> Usage:
>> volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]]
>> [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
>> [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
>>
>> [root@glustep1 ~]#
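
Answering y at that prompt creates the volume; for scripts, the CLI's script
mode auto-confirms such prompts. A sketch:

gluster --mode=script volume create gv0 replica 2 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
gluster volume start gv0

With four bricks at replica 2 this builds a distributed-replicate volume: two
mirrored pairs with files distributed across them, which is the "raid 10"
layout.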
>>
>> thanks
>>