<div dir="ltr">Hi,<div><br></div><div>I don't have any more hosts available.</div><div><br></div><div>I am a bit lost here: why replica 3 arbiter 1, i.e. not replica 2 arbiter 1? Also, there is no distributed part; is the distributed flag automatically assumed? And with replica 3 there is already a quorum (2 of 3), so no arbiter is needed? I have this running like this already, so I am assuming it's robust?</div><div><br></div><div>I am still struggling to understand the syntax; I wish the docs/examples were better.</div><div><br></div><div>On each gluster node I have an unused 120GB data1 partition left over from the OS install, so the arbiter bricks could go there?</div><div><br></div><div><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">In which case:</span></div><div><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><br></span></div><div><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">gluster volume create my-volume replica 2 arbiter 1 host1:/path/to/brick host2:/path/to/brick (arb-)host3:/path/to/brick2 host4:/path/to/brick host5:/path/to/brick (arb-)host6:/path/to/brick2 host3:/path/to/brick host6:/path/to/brick (arb-)host1:/path/to/brick2</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Is this a sane command?</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Otherwise, I am beginning to think I may be better off doing 3 x 2TB separate volumes.
Rather interesting trying to understand this stuff!</span><br><br></div><div> </div></div><div class="gmail_extra"><br><div class="gmail_quote">On 12 June 2018 at 23:10, Dave Sherohman <span dir="ltr"><<a href="mailto:dave@sherohman.org" target="_blank">dave@sherohman.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Tue, Jun 12, 2018 at 03:04:14PM +1200, Thing wrote:<br>
> What I would like to do I think is a,<br>
> <br>
</span>> *Distributed-Replicated volume*<br>
<span class="">> <br>
> a) have 1 and 2 as raid1<br>
> b) have 4 and 5 as raid1<br>
> c) have 3 and 6 as a raid1<br>
> d) join this as concatenation 2+2+2tb<br>
<br>
</span>You probably don't actually want to do that because quorum is handled<br>
separately for each subvolume (bricks 1/2, 4/5, or 3/6), not a single<br>
quorum for the volume as a whole. (Consider if bricks 1 and 2 both went<br>
down. You'd still have 4 of 6 bricks running, so whole-volume quorum<br>
would still be met, but the volume can't continue to run normally since<br>
the first subvolume is completely missing.)<br>
<br>
In the specific case of replica 2, gluster treats the first brick in<br>
each subvolume as "slightly more than one", so you'd be able to continue<br>
normally if brick 2, 5, or 6 went down, but, if brick 1, 4, or 3 went<br>
down, all files on that subvolume would become read-only.<br>
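[Editor's note: the per-subvolume quorum rule above can be sketched as a toy model. The brick names and the exact "first brick counts as slightly more than one" weighting are illustrative assumptions, not gluster's actual internals.]

```python
# Toy model of per-subvolume quorum in a distributed-replicated volume:
# the volume is only fully writable if EVERY replica pair has quorum,
# not if a majority of all bricks happen to be up. For replica 2, the
# first brick in each pair is weighted "slightly more than one".

subvolumes = [("brick1", "brick2"), ("brick4", "brick5"), ("brick3", "brick6")]
FIRST_BRICK_WEIGHT = 1.5  # assumed weighting to mimic the described tie-break

def subvol_writable(subvol, up_bricks):
    """True if this replica set retains a weighted majority of its bricks."""
    weights = [FIRST_BRICK_WEIGHT] + [1.0] * (len(subvol) - 1)
    up = sum(w for brick, w in zip(subvol, weights) if brick in up_bricks)
    return up > sum(weights) / 2

def volume_fully_writable(up_bricks):
    """True only if every subvolume still has quorum."""
    return all(subvol_writable(sv, up_bricks) for sv in subvolumes)

# Losing brick2 (a second brick) is survivable...
print(volume_fully_writable({"brick1", "brick3", "brick4", "brick5", "brick6"}))  # True
# ...but losing brick1 (a first brick) makes that subvolume read-only:
print(volume_fully_writable({"brick2", "brick3", "brick4", "brick5", "brick6"}))  # False
```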
<span class=""><br>
> I tried to do this and failed as it kept asking for an arbiter, which the<br>
> docs simply dont mention how to do.<br>
<br>
</span><a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/" rel="noreferrer" target="_blank">https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/</a><br>
<span class=""><br>
> So say we have,<br>
> <br>
> a) have 1 and 2 as raid1 with 3 as the arbiter?<br>
> b) have 4 and 5 as raid 1 with 6 as the arbiter<br>
> c) 3 and 6 as a raid 1 with 5 as the arbiter<br>
> d) join this as concatenation 2+2+2tb<br>
<br>
</span>I would recommend finding one or more other servers with small amounts<br>
of unused space and allocating the arbiter bricks there, or carving a<br>
gig or two out of your current bricks for that purpose. Arbiters only<br>
need about 4k of disk space per file in the subvolume, regardless of the<br>
actual file size (the arbiter only stores metadata), so TB-sized<br>
arbiters would be a huge waste of space, especially if you're only<br>
putting a few very large files (such as VM disk images) on the volume.<br>
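[Editor's note: that per-file rule of thumb makes arbiter sizing easy to estimate. A minimal sketch, assuming the approximate 4 KiB figure quoted above rather than an exact gluster number:]

```python
# Rough arbiter brick sizing from the ~4 KiB-per-file rule of thumb:
# the arbiter stores only metadata, so usage scales with file COUNT,
# not with the data size of the subvolume.

ARBITER_BYTES_PER_FILE = 4 * 1024  # approximate metadata footprint per file

def arbiter_size_bytes(file_count: int) -> int:
    """Estimated arbiter usage for a subvolume holding file_count files."""
    return file_count * ARBITER_BYTES_PER_FILE

# e.g. a subvolume holding 200 large VM disk images needs under 1 MiB:
print(arbiter_size_bytes(200))  # 819200 bytes (~0.8 MiB)
```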
<br>
As a real-world data point, I'm using basically the setup you're aiming<br>
for - six data bricks plus three arbiters, used to store VM disk images.<br>
My data bricks are 11T each, while my arbiters are 98G. Disk usage for<br>
the volume is currently at 19%, but all arbiters are under 1% usage (the<br>
largest has 370M used). Assuming my usage patterns don't change, I<br>
could completely fill my 11T subvolumes and only need about 1.5G in the<br>
corresponding arbiters.<br>
<span class=""><br>
> if so what is the command used to build this?<br>
<br>
</span># gluster volume create my-volume replica 3 arbiter 1 host1:/path/to/brick host2:/path/to/brick arb-host1:/path/to/brick host4:/path/to/brick host5:/path/to/brick arb-host2:/path/to/brick host3:/path/to/brick host6:/path/to/brick arb-host3:/path/to/brick<br>
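[Editor's note: split across lines for readability, the command above groups into three replica sets of three bricks each, the last brick of each set being its arbiter; host names and paths are the placeholders from the command above.]

```shell
# With nine bricks and "replica 3 arbiter 1", gluster takes the bricks
# three at a time: each triplet is one replica set whose third brick is
# the arbiter. Distribution across the three sets is implicit; there is
# no separate "distribute" keyword.
gluster volume create my-volume replica 3 arbiter 1 \
    host1:/path/to/brick host2:/path/to/brick arb-host1:/path/to/brick \
    host4:/path/to/brick host5:/path/to/brick arb-host2:/path/to/brick \
    host3:/path/to/brick host6:/path/to/brick arb-host3:/path/to/brick
```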
<span class="HOEnZb"><font color="#888888"><br>
-- <br>
Dave Sherohman<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</font></span></blockquote></div><br></div>