<div dir="ltr">Hi Bap,<br><div><div class="gmail_extra"><br><div class="gmail_quote">On 6 February 2017 at 07:27, pasawwa <span dir="ltr"><<a href="mailto:pasawwa@gmail.com" target="_blank">pasawwa@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<font face="Ubuntu">Hello,<br>
<br>
we just created a 3-node gluster volume ( replica 3 arbiter 1 ) and get this
"systemctl status glusterd" message:<br>
<br>
<a href="http://n1.test.net" target="_blank">n1.test.net</a> etc-glusterfs-glusterd.vol[<wbr>1458]: [2017-02-03
17:56:24.691334] C [MSGID: 106003]
[glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume TESTp1. Starting
local bricks.<br>
<br>
How can we set up the gluster quorum params to eliminate this warning
and <b>to avoid split brain and stay writeable</b>
if any single node goes down?<br>
<br>
current settings:<br>
</font><tt><br>
</tt><tt>server.event-threads: 8</tt><tt><br>
</tt><tt>client.event-threads: 8</tt><tt><br>
</tt><tt>performance.io-thread-count: 20</tt><tt><br>
</tt><tt>performance.readdir-ahead: on</tt><tt><br>
</tt><tt>performance.quick-read: off</tt><tt><br>
</tt><tt>performance.read-ahead: off</tt><tt><br>
</tt><tt>performance.io-cache: off</tt><tt><br>
</tt><tt>performance.stat-prefetch: off</tt><tt><br>
</tt><tt>cluster.eager-lock: enable</tt><tt><br>
</tt><tt>network.remote-dio: enable</tt><tt><br>
</tt><tt><b>cluster.quorum-type: auto </b></tt><tt> #</tt><tt>
we are not sure this is 100% safe against split brain (e.g. when
updating nodes)<br>
</tt><tt><b>cluster.server-quorum-type: server </b></tt><tt># it
looks to be OK</tt><tt><br>
</tt><tt>features.shard: on</tt><tt><br>
</tt><tt>cluster.data-self-heal-algorithm: diff</tt><tt><br>
</tt><tt>storage.owner-uid: 36</tt><tt><br>
</tt><tt>storage.owner-gid: 36</tt><tt><br>
</tt><tt>server.allow-insecure: on</tt><tt><br>
</tt><tt>network.ping-timeout: 10</tt><font face="Ubuntu"><br></font></div></blockquote><div><br>For a replica 3 setup, those quorum settings should allow you to maintain writes and avoid split-brain should any single node fail (with cluster.quorum-type set to auto, a replica 3 volume stays writable as long as at least 2 of the 3 bricks are up).<br></div><div>To automate the healing process, I'd also add these to the list (see the example commands at the end of this mail):<br></div><div><br>cluster.entry-self-heal: on<br>cluster.metadata-self-heal: on<br>cluster.data-self-heal: on<br><br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF"><font face="Ubuntu">
<br>
<a class="gmail-m_4278632378061514743moz-txt-link-freetext" href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/" target="_blank">https://gluster.readthedocs.<wbr>io/en/latest/Administrator%<wbr>20Guide/arbiter-volumes-and-<wbr>quorum/</a><br>
<br>
regards<br>
Bap.<br>
</font>
</div>
</blockquote></div><br></div></div></div>