[Gluster-users] Quorum in distributed-replicate volume

Karthik Subrahmanya ksubrahm at redhat.com
Tue Feb 27 12:20:49 UTC 2018


On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:

> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks,
> > > > if you can configure three more arbiter bricks based on the
> > > > guidelines in the doc [1], you can do it live and the distribution
> > > > count will also remain unchanged.
> > >
> > > I can probably find one or more machines with a few hundred GB free
> > > which could be allocated for arbiter bricks if it would be significantly
> > > simpler and safer than repurposing the existing bricks (and I'm getting
> > > the impression that it probably would be).
> >
> > Yes, it is the simpler and safer way of doing it.
> >
> > >   Does it particularly matter
> > > whether the arbiters are all on the same node or on three separate
> > > nodes?
> > >
> > No, it doesn't matter, as long as the bricks of the same replica subvol
> > are not on the same node.
>
> OK, great.  So basically just install the gluster server on the new
> node(s), do a peer probe to add them to the cluster, and then
>
> gluster volume create palantir replica 3 arbiter 1 [saruman brick]
> [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick]
> [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3]
>
The command you want is:

gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter 2> <arbiter 3>

It will convert the existing volume to an arbiter volume and add the
specified bricks as arbiter bricks to the existing subvols.
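For example, assuming the three arbiter bricks end up on hypothetical
hosts arbiter1, arbiter2 and arbiter3 (the host names and brick paths
below are just placeholders, use whatever you actually provision), the
full sequence for your "palantir" volume might look something like:

gluster peer probe arbiter1
gluster peer probe arbiter2
gluster peer probe arbiter3

gluster volume add-brick palantir replica 3 arbiter 1 \
    arbiter1:/bricks/palantir/arb \
    arbiter2:/bricks/palantir/arb \
    arbiter3:/bricks/palantir/arb

The arbiter bricks are attached to the existing replica subvols in the
order given, so the first one pairs with the saruman/gandalf subvol, the
second with azathoth/yog-sothoth, and the third with cthulhu/mordiggian.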
Once they are successfully added, self-heal should start automatically, and
you can check the heal status with:

gluster volume heal <volname> info
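For the volume above that would be:

gluster volume heal palantir info

It prints the entries still pending heal for each brick; once the
per-brick "Number of entries" count reaches 0 everywhere, the arbiter
bricks are fully populated.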

Regards,
Karthik