<html><head></head><body><div class="yahoo-style-wrap" style="font-family:courier new, courier, monaco, monospace, sans-serif;font-size:16px;"><div dir="ltr" data-setdir="false">The arbiter can help in the second scenario from <a href="https://docs.gluster.org/en/main/Administrator-Guide/Split-brain-and-ways-to-deal-with-it/" rel="nofollow" target="_blank" class="">https://docs.gluster.org/en/main/Administrator-Guide/Split-brain-and-ways-to-deal-with-it/</a> .<br><br>Best Regards,<br>Strahil Nikolov<br></div><div><br></div><div><br></div><div><br></div><div id="yahoo_quoted_0767089408" class="yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Monday, 21 October 2024 at 14:40:24 GMT+3, Gilberto Ferreira <gilberto.nunes32@gmail.com> wrote:
</div>
<div><br></div>
<div><br></div>
<div><div id="yiv5838816718"><div><div dir="ltr">Ok! I got it about how many disks I can lose and so on.<div>But regarding the arbiter issue, I always set these parameters on the gluster volume in order to avoid split-brain, and I might add that this works pretty well for me.<br clear="none">I already have a Proxmox VE cluster with 2 nodes and about 50 VMs in production, running different Linux distros - and Windows as well - with cPanel and other stuff.</div><div>Anyway, here are the parameters I have used:</div><div><br clear="none"></div><div><span style="font-family:monospace;"><span style="color:rgb(0,0,0);">gluster vol set VMS cluster.heal-timeout 5
</span><br clear="none">gluster vol heal VMS enable
<br clear="none">gluster vol set VMS cluster.quorum-reads false
<br clear="none">gluster vol set VMS cluster.quorum-count 1
<br clear="none">gluster vol set VMS network.ping-timeout 2
<br clear="none">gluster vol set VMS cluster.favorite-child-policy mtime
<br clear="none">gluster vol heal VMS granular-entry-heal enable
<br clear="none">gluster vol set VMS cluster.data-self-heal-algorithm full
<br clear="none">gluster vol set VMS features.shard on
<br clear="none">gluster vol set VMS performance.write-behind off
<br clear="none">gluster vol set VMS performance.flush-behind off<br clear="none">
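On the arbiter point raised in this thread: a replica 2 volume such as VMS can be converted to replica 2 + arbiter 1 by adding one arbiter brick per replica pair. A rough sketch, assuming a hypothetical third node pve03 and placeholder brick paths (one arbiter brick per distribution group - adjust the count and paths to your layout):

```shell
# Convert an existing replica 2 volume with two distribution groups
# to replica 2 + arbiter 1. pve03 and the /arbiter paths are placeholders.
gluster volume add-brick VMS replica 3 arbiter 1 \
    pve03:/arbiter/brick1 pve03:/arbiter/brick2

# Verify the new layout and monitor healing of the arbiter bricks.
gluster volume info VMS
gluster volume heal VMS info
```

The arbiter stores only metadata, so small bricks on a third machine are enough to provide the quorum that replica 2 alone lacks.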
<br clear="none"></span></div><div> <br clear="all"><div><div dir="ltr" class="yiv5838816718gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br clear="none"></div><div><br clear="none"></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br clear="none"></div><div><p style="font-size:12.8px;margin:0px;"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div><br clear="none"></div></div><br clear="none"><div id="yiv5838816718yqt89275" class="yiv5838816718yqt2426140500"><div class="yiv5838816718gmail_quote"><div dir="ltr" class="yiv5838816718gmail_attr">On Sun, 20 Oct 2024 at 17:34, Strahil Nikolov <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:hunter86_bg@yahoo.com" target="_blank" href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;" class="yiv5838816718gmail_quote"><div><div style="font-family:courier, monaco, monospace, sans-serif;font-size:16px;"><div></div>
<div dir="ltr">If it's replica 2, you can lose up to 1 replica per distribution group.</div><div dir="ltr">For example, if you have a volume TEST with the following setup:<br clear="none"><br clear="none">server1:/brick1<br clear="none">server2:/brick1<br clear="none">server1:/brick2<br clear="none">server2:/brick2<br clear="none"><br clear="none">You can lose any one brick of the replica "/brick1" and any one brick of the replica "/brick2". So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.<br clear="none">As usual, consider if you can add an arbiter for your volumes.<br clear="none"><br clear="none">Best Regards,<br clear="none">Strahil Nikolov<br clear="none"></div><div><br clear="none"></div>
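For reference, the brick order at create time is what defines those replica pairs: bricks are grouped into replica sets in the order they are listed. A sketch of how a volume with the layout above would be created (server names and paths follow the example; all other options are defaults):

```shell
# A 2x2 distributed-replicate volume: server1:/brick1 + server2:/brick1
# form one replica set, server1:/brick2 + server2:/brick2 the other.
# gluster will warn that plain replica 2 volumes are prone to split-brain.
gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2
gluster volume start TEST
```

Because each file lives in exactly one replica pair, losing both bricks of the same pair loses that pair's data, while losing one brick from each pair does not.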
</div><div id="yiv5838816718m_4691292116739965778yahoo_quoted_9492009889">
<div style="font-family:Helvetica, Arial, sans-serif;font-size:13px;color:rgb(38,40,42);">
<div>
On Saturday, 19 October 2024 at 18:32:40 GMT+3, Gilberto Ferreira <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:gilberto.nunes32@gmail.com" target="_blank" href="mailto:gilberto.nunes32@gmail.com">gilberto.nunes32@gmail.com</a>> wrote:
</div>
<div><br clear="none"></div>
<div><br clear="none"></div>
<div><div id="yiv5838816718m_4691292116739965778yiv1721844456"><div dir="ltr">Hi there.<div>I have 2 servers with this number of disks on each side:</div><div><br clear="none"></div><div>pve01:~# df | grep disco<br clear="none">/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0<br clear="none">/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3<br clear="none">/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1<br clear="none">/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2<br clear="none">/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1<br clear="none">/dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0<br clear="none">/dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4</div><div><br clear="none"></div><div>I have a Type: Distributed-Replicate gluster volume</div><div>So my question is: how many disks can be in a failed state before I lose data?</div><div><br clear="none"></div><div>Thanks in advance</div><div><br clear="all"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br clear="none"></div><div><br clear="none"></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br clear="none"></div><div> </div><div><p style="font-size:12.8px;margin:0px;"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p><p style="font-size:12.8px;margin:0px;"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div></div></div>
</div>________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"></div>
</div>
</div></div></blockquote></div></div>
</div></div></div>
</div>
</div></div></body></html>