[Gluster-users] Quorum in distributed-replicate volume

Karthik Subrahmanya ksubrahm at redhat.com
Tue Feb 27 11:29:36 UTC 2018


On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:

> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this?  I'm trying to find documentation on
> distribution counts in gluster, but my google-fu is failing me.
>
The more distribute subvolumes a volume has, the better files (and therefore
I/O load) are spread across the bricks; going from 3 subvols to 2 means each
remaining subvol holds and serves a larger share of the data.
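To make that concrete (the volume name and brick counts below are just an
example, not your actual layout), the distribute count is the first number in
the brick layout shown by volume info:

    # Hypothetical 6-brick distributed-replicate volume "myvol":
    gluster volume info myvol
    #   Type: Distributed-Replicate
    #   Number of Bricks: 3 x 2 = 6     <- 3 distribute subvols, replica 2
    #
    # Reusing two data bricks as arbiters would turn this into
    #   Number of Bricks: 2 x (2 + 1) = 6
    # i.e. new files would hash across 2 subvols instead of 3.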

>
> > - Your data on the first subvol i.e., replica subvol - 1 will be
> > unavailable till it is copied to the other subvols
> > after removing the bricks from the cluster.
>
> Hmm, ok.  I was sure I had seen a reference at some point to a command
> for migrating data off bricks to prepare them for removal.
>
> Is there an easy way to get a list of all files which are present on a
> given brick, then, so that I can see which data would be unavailable
> during this transfer?
>
The easiest way is to do an "ls" directly on the backend brick path.
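For example, on a node holding one of those bricks (the brick path below is a
placeholder, use your actual brick directory), skipping gluster's internal
.glusterfs metadata directory:

    # List user files stored on one backend brick (example path):
    find /bricks/brick1/data -path '*/.glusterfs' -prune -o -type f -print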

>
> > Since arbiter bricks need not be of the same size as the data bricks, if you
> > can configure three more arbiter bricks
> > based on the guidelines in the doc [1], you can do it live and you will
> > have the distribution count also unchanged.
>
> I can probably find one or more machines with a few hundred GB free
> which could be allocated for arbiter bricks if it would be significantly
> simpler and safer than repurposing the existing bricks (and I'm getting
> the impression that it probably would be).

Yes, that is the simpler and safer way to do it.
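For reference, the live conversion is a single add-brick that appends one
small arbiter brick to each replica subvol (the volume name, hosts and paths
below are placeholders; please go through the arbiter doc [1] first):

    # Add one arbiter brick per replica subvol; the first new brick is
    # attached to the first subvol, the second to the second, and so on.
    gluster volume add-brick myvol replica 3 arbiter 1 \
        nodeX:/bricks/arb0 nodeY:/bricks/arb1 nodeZ:/bricks/arb2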

>   Does it particularly matter
> whether the arbiters are all on the same node or on three separate
> nodes?
>
No, it doesn't matter, as long as the bricks of the same replica subvol are
not on the same node.
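As a sketch (node names are placeholders), either of these placements works:

    # Arbiters spread over the existing nodes -- each subvol's arbiter
    # lives on a node that holds none of that subvol's data bricks:
    #   subvol-0: nodeA:/bricks/d0  nodeB:/bricks/d0  nodeC:/arb/d0 (arbiter)
    #   subvol-1: nodeB:/bricks/d1  nodeC:/bricks/d1  nodeA:/arb/d1 (arbiter)
    #   subvol-2: nodeC:/bricks/d2  nodeA:/bricks/d2  nodeB:/arb/d2 (arbiter)
    #
    # Or all three arbiters on one separate machine -- also fine, since
    # that machine holds no data bricks of any subvol.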

Regards,
Karthik

>
> --
> Dave Sherohman
>