[Gluster-users] Arbiter doesn't get created

Atin Mukherjee atin.mukherjee83 at gmail.com
Wed Mar 23 13:52:50 UTC 2016


Just a wild guess: is it a fresh install or an upgrade? If the latter, have
you bumped up the op-version?
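
A quick way to check is the operating-version glusterd has persisted on each
node, and to raise it cluster-wide if it is still at the pre-upgrade value.
A rough sketch (the number 30706 below is only an assumption for a 3.7.6
cluster; use the op-version that matches your installed release):

# grep operating-version /var/lib/glusterd/glusterd.info
# gluster volume set all cluster.op-version 30706

If the op-version was too low when the volume was created, the arbiter count
may simply not have been applied; in that case the volume has to be deleted
and recreated after bumping it.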

-Atin
Sent from one plus one
On 23-Mar-2016 7:04 pm, "Ralf Simon" <simon at denic.de> wrote:

> Hello,
>
> I've installed ....
>
> # yum info glusterfs-server
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
> Installed Packages
> Name        : glusterfs-server
> Arch        : x86_64
> Version     : 3.7.6
> Release     : 1.el7
> Size        : 4.3 M
> Repo        : installed
> From repo   : latest
> Summary     : Clustered file-system server
> URL         : http://www.gluster.org/docs/index.php/GlusterFS
> License     : GPLv2 or LGPLv3+
> Description : GlusterFS is a distributed file-system capable of scaling to
> several
>             : petabytes. It aggregates various storage bricks over
> Infiniband RDMA
>             : or TCP/IP interconnect into one large parallel network file
>             : system. GlusterFS is one of the most sophisticated file
> systems in
>             : terms of features and extensibility.  It borrows a powerful
> concept
>             : called Translators from GNU Hurd kernel. Much of the code in
> GlusterFS
>             : is in user space and easily manageable.
>             :
>             : This package provides the glusterfs server daemon.
>
> I wanted to build a ...
>
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0
> volume create: gv0: success: please start the volume to access data
>
> ... but I got ...
>
> # gluster volume info
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: 329325fc-ceed-4dee-926f-038f44281678
> Status: Created
> Number of Bricks: *1 x 3 = 3*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Options Reconfigured:
> performance.readdir-ahead: on
>
> ... without the requested arbiter!
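>
> As a cross-check, the arbiter count glusterd recorded for the volume should
> also be visible on disk (assuming the usual 3.7 layout under
> /var/lib/glusterd; the path may differ on other installs):
>
> # grep -i arbiter /var/lib/glusterd/vols/gv0/info
>
> On a correctly created arbiter volume this is expected to show
> arbiter_count=1.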
>
> The same situation with 6 bricks ...
>
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0 d90029:/data/brick1
> d90031:/data/brick1 d90034:/data/brick1
> volume create: gv0: success: please start the volume to access data
> [root@d90029 ~]# gluster vol info
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 2b8dbcc0-c4bb-41e3-a870-e164d8d10c49
> Status: Created
> Number of Bricks: *2 x 3 = 6*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Brick4: d90029:/data/brick1
> Brick5: d90031:/data/brick1
> Brick6: d90034:/data/brick1
> Options Reconfigured:
> performance.readdir-ahead: on
>
>
> In contrast, the documentation says ...
>
>
> *Arbiter configuration*
>
> The arbiter configuration a.k.a. the arbiter volume is the perfect sweet
> spot between a 2-way replica and 3-way replica to avoid files getting into
> split-brain, *without the 3x storage space* as mentioned earlier. The
> syntax for creating the volume is:
>
> *gluster volume create <VOLNAME> replica 3 arbiter 1 host1:brick1
> host2:brick2 host3:brick3*
>
> For example:
>
> *gluster volume create testvol replica 3 arbiter 1
> 127.0.0.2:/bricks/brick{1..6} force*
>
> volume create: testvol: success: please start the volume to access data
>
> *gluster volume info*
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: ae6c4162-38c2-4368-ae5d-6bad141a4119
> Status: Created
> Number of Bricks: *2 x (2 + 1) = 6*
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/bricks/brick1
> Brick2: 127.0.0.2:/bricks/brick2
> Brick3: 127.0.0.2:/bricks/brick3 *(arbiter)*
> Brick4: 127.0.0.2:/bricks/brick4
> Brick5: 127.0.0.2:/bricks/brick5
> Brick6: 127.0.0.2:/bricks/brick6 *(arbiter)*
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
>
>
>
> What's going wrong? Can anybody help?
>
> Kind Regards
> Ralf Simon
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>