<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <span dir="ltr"><<a href="mailto:dave@sherohman.org" target="_blank">dave@sherohman.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:<br>
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
> > > > It would be great if you can consider configuring an arbiter or
> > > > replica 3 volume.
> > >
> > > I can. My bricks are 2x850G and 4x11T, so I can repurpose the small
> > > bricks as arbiters with minimal effect on capacity. What would be the
> > > sequence of commands needed to:
> > >
> > > 1) Move all data off of bricks 1 & 2
> > > 2) Remove that replica from the cluster
> > > 3) Re-add those two bricks as arbiters
> > >
> > > (And did I miss any additional steps?)
> > >
> > > Unfortunately, I've been running a few months already with the current
> > > configuration and there are several virtual machines running off the
> > > existing volume, so I'll need to reconfigure it online if possible.
> > >
> > Without knowing the volume configuration it is difficult to suggest the
> > configuration change, and since it is a live system you may end up with
> > data unavailability or data loss.
> > Can you give the output of "gluster volume info <volname>",
> > and tell us which brick is of what size?
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub: Inactive
> features.bitrot: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> network.ping-timeout: 1013
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> features.shard: on
> cluster.data-self-heal-algorithm: full
> storage.owner-uid: 64055
> storage.owner-gid: 64055
>
> For brick sizes, saruman/gandalf have
>
> $ df -h /var/local/brick0
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0
>
> and the other four have
>
> $ df -h /var/local/brick0
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdb1        11T  254G   11T   3% /var/local/brick0

If you want to use the first two bricks as arbiters, you need to be aware of the following:
- Your distribution count will decrease from 3 to 2.
- The data on the first subvolume (replica subvol 1) will be unavailable until it has been migrated to the other subvolumes as part of removing those bricks from the volume; a rough sketch of that procedure is given below.

Since arbiter bricks need not be the same size as the data bricks, if you can instead configure three additional arbiter bricks, sized according to the guidelines in the doc [1], you can do the conversion live and the distribution count stays unchanged; see the second sketch below.

One more thing from the volume info: only options which have been explicitly reconfigured appear in that output. cluster.quorum-type is in the list, which means it was set manually at some point.
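For the first option, the sequence would be roughly as follows. Treat this strictly as a sketch rather than a tested procedure: the brick paths are taken from your volume info above, and you should walk through it on a throwaway volume before touching the live one.

  # 1) Migrate the data off the first replica pair and shrink to 2 x 2.
  #    "start" kicks off the migration; poll "status" until it reports
  #    completed, then run "commit".
  gluster volume remove-brick palantir \
      saruman:/var/local/brick0/data gandalf:/var/local/brick0/data start
  gluster volume remove-brick palantir \
      saruman:/var/local/brick0/data gandalf:/var/local/brick0/data status
  gluster volume remove-brick palantir \
      saruman:/var/local/brick0/data gandalf:/var/local/brick0/data commit

  # 2) Wipe the removed brick directories before reusing them (gluster will
  #    refuse a path that still carries the old volume's xattrs and
  #    .glusterfs directory), then add one arbiter per remaining subvolume:
  gluster volume add-brick palantir replica 3 arbiter 1 \
      saruman:/var/local/brick0/data gandalf:/var/local/brick0/data

For the second option, you leave the existing bricks alone and add one arbiter brick per subvolume. The arbiter paths below are purely illustrative, and I have ordered the new bricks so that each subvolume's arbiter lands on a host outside that subvolume; size the bricks per the file-count based guideline in [1], since an arbiter stores only metadata.

  # Convert 3 x 2 into 3 x (2 + 1) in one step. The new bricks are assigned
  # in order: the first becomes the arbiter of subvolume 1 (saruman/gandalf),
  # the second of subvolume 2, the third of subvolume 3.
  gluster volume add-brick palantir replica 3 arbiter 1 \
      azathoth:/var/local/arbiter/data \
      saruman:/var/local/arbiter/data \
      gandalf:/var/local/arbiter/data

  # Self-heal then populates the new arbiters; you can watch progress with:
  gluster volume heal palantir info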
<span class="gmail-HOEnZb"><font color="#888888"><br>
<br>
--<br>
Dave Sherohman<br>
</font></span></blockquote></div><br></div></div>