<div dir="ltr">OK! I got it about how many disks I can lose and so on.<div>But regarding the arbiter issue: I always set these parameters on the gluster volume in order to avoid split-brain, and I might add that they work pretty well for me.<br>I already have a Proxmox VE cluster with 2 nodes and about 50 VMs in production, running various Linux distros (and Windows as well) with cPanel and other software.</div><div>Anyway, here are the parameters I have used:</div><div><br></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster vol set VMS cluster.heal-timeout 5
</span><br>gluster vol heal VMS enable
<br>gluster vol set VMS cluster.quorum-reads false
<br>gluster vol set VMS cluster.quorum-count 1
<br>gluster vol set VMS network.ping-timeout 2
<br>gluster vol set VMS cluster.favorite-child-policy mtime
<br>gluster vol heal VMS granular-entry-heal enable
<br>gluster vol set VMS cluster.data-self-heal-algorithm full
<br>gluster vol set VMS features.shard on
<br>gluster vol set VMS performance.write-behind off
<br>gluster vol set VMS performance.flush-behind off<br>
<br></span></div><div> <br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 20 Oct 2024 at 17:34, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:"courier new",courier,monaco,monospace,sans-serif;font-size:16px"><div></div>
<div dir="ltr">If it's replica 2, you can lose up to 1 brick per distribution group.</div><div dir="ltr">For example, if you have a volume TEST with such a setup:<br><br>server1:/brick1<br>server2:/brick1<br>server1:/brick2<br>server2:/brick2<br><br>You can lose any one brick of the replica "/brick1" and any one brick of the replica "/brick2". So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.<br>As usual, consider whether you can add an arbiter for your volumes.<br><br>Best Regards,<br>Strahil Nikolov<br></div><div><br></div>
</div><div id="m_4691292116739965778yahoo_quoted_9492009889">
<div style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;color:rgb(38,40,42)">
<div>
On Saturday, 19 October 2024 at 18:32:40 GMT+3, Gilberto Ferreira <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:
</div>
<div><br></div>
<div><br></div>
<div><div id="m_4691292116739965778yiv1721844456"><div dir="ltr">Hi there.<div>I have 2 servers with this number of disks on each side:</div><div><br></div><div>pve01:~# df | grep disco<br>/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0<br>/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3<br>/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1<br>/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2<br>/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1<br>/dev/sdc 2.0T 19G 2.0T 1% /disco2TB-0<br>/dev/sdj 1.0T 9.2G 1015G 1% /disco1TB-4</div><div><br></div><div>I have a Distributed-Replicate gluster volume.</div><div>So my question is: how many disks can fail before I lose data?</div><div><br></div><div>Thanks in advance</div><div><br clear="all"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div> </div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div></div></div>
</div>________<br><br><br><br>Community Meeting Calendar:<br><br>Schedule -<br>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br>Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div>
</div>
</div></div></blockquote></div>
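P.S. The `vol set` options listed at the top can be applied in one shell loop rather than typed line by line. A minimal sketch, assuming the volume is named VMS; it runs in dry-run mode, printing each command instead of executing it (pipe the output to `sh` to apply for real). The two `gluster vol heal VMS ... enable` lines use the separate `heal` subcommand and are not included here:

```shell
# Dry-run sketch: build the "gluster vol set" commands for volume VMS
# (assumed name) instead of executing them; pipe the output to sh to apply.
VOL=VMS
cmds=$(while read -r opt val; do
  printf 'gluster vol set %s %s %s\n' "$VOL" "$opt" "$val"
done <<'EOF'
cluster.heal-timeout 5
cluster.quorum-reads false
cluster.quorum-count 1
network.ping-timeout 2
cluster.favorite-child-policy mtime
cluster.data-self-heal-algorithm full
features.shard on
performance.write-behind off
performance.flush-behind off
EOF
)
echo "$cmds"
```

Keeping the options in a heredoc like this also makes it easy to diff the intended settings against `gluster volume get VMS all` later.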